[jira] [Created] (PHOENIX-6729) Enable staging website for phoenix

2022-06-13 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6729:
---

 Summary: Enable staging website for phoenix 
 Key: PHOENIX-6729
 URL: https://issues.apache.org/jira/browse/PHOENIX-6729
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


Based on the instructions mentioned here: 
https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-WebsitedeploymentserviceforGitrepositories
, I am planning to test the changes by enabling this for the staging profile.
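
For reference, a minimal sketch of what the staging entry in .asf.yaml could 
look like, following the INFRA page above (the branch name and the empty 
profile value here are assumptions to be confirmed during testing):

{code:yaml}
# Hypothetical .asf.yaml fragment: publish the asf-staging branch to the
# project's staged.apache.org site; a non-empty profile would publish to
# phoenix-<profile>.staged.apache.org instead.
staging:
  profile: ~
  whoami: asf-staging
{code}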



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6522) Unique Id generation support queryId

2022-04-20 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6522:
---

Assignee: (was: Kiran Kumar Maturi)

> Unique Id generation support queryId
> 
>
> Key: PHOENIX-6522
> URL: https://issues.apache.org/jira/browse/PHOENIX-6522
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Priority: Major
>
> Sometimes a user might want a queryId to be generated for the query rather than 
> supplying it. This feature will be config based: if enabled, it will generate a 
> queryId for all queries.
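
A minimal sketch of the idea (the config key and the UUID scheme below are 
assumptions, not the final design):

{code:java}
// Hypothetical: generate a queryId only when the feature flag is enabled.
boolean generate = conf.getBoolean("phoenix.query.generate.id", false); // assumed key
String queryId = generate ? java.util.UUID.randomUUID().toString() : null;
{code}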



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6341) Enable running IT tests from PHERF module during builds and patch checkins

2022-04-20 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6341:
---

Assignee: Kiran Kumar Maturi

> Enable running IT tests from PHERF module during builds and patch checkins
> --
>
> Key: PHOENIX-6341
> URL: https://issues.apache.org/jira/browse/PHOENIX-6341
> Project: Phoenix
>  Issue Type: Test
>Reporter: Jacob Isaac
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6672) Move phoenix website from svn to git

2022-04-20 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6672:
---

Assignee: Kiran Kumar Maturi

> Move phoenix website from svn to git
> 
>
> Key: PHOENIX-6672
> URL: https://issues.apache.org/jira/browse/PHOENIX-6672
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>
> Currently our website is hosted from svn. It would be good to move it to git 
> so that other developers can create PRs as they do for any JIRA. This will 
> help us improve the workflow for contributing to Phoenix documentation.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6554) Pherf CLI option long/short option names do not follow conventions

2022-04-20 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6554:
---

Assignee: Kiran Kumar Maturi

> Pherf CLI option long/short option names do not follow conventions
> --
>
> Key: PHOENIX-6554
> URL: https://issues.apache.org/jira/browse/PHOENIX-6554
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>
> The Pherf script does not use long and short option names consistently.
> For example:
> -t and --thin are for specifying the thin PQS URL, 
> and 
> -z and --zookeeper are for the ZK quorum, 
> but 
> -schemaFile is used to specify the schema file, and 
> --schemaFile does not work.
> IMO options that look like long options should also be accepted with a double 
> dash, or we could just invent new short options for them (which would break 
> backwards compatibility).
> i.e., instead of 
> {code:java}
> options.addOption("schemaFile", true,
>     "Regex or file name for the Test phoenix table schema .sql to use.");
> {code}
> we could have one of the following:
> {code:java}
> options.addOption("sf", "schemaFile", true,
>     "Regex or file name for the Test phoenix table schema .sql to use.");
> options.addOption("schemaFile", "schemaFile", true,
>     "Regex or file name for the Test phoenix table schema .sql to use.");
> {code}
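
For context, a small sketch of how the first proposed form behaves with 
Apache Commons CLI (assuming the standard DefaultParser): registering both a 
short and a long name makes -sf and --schemaFile resolve to the same option.

{code:java}
// Sketch only (org.apache.commons.cli): with both names registered,
// either spelling parses. parse() throws ParseException.
Options options = new Options();
options.addOption("sf", "schemaFile", true,
    "Regex or file name for the Test phoenix table schema .sql to use.");
CommandLine cmd = new DefaultParser().parse(options,
    new String[] { "--schemaFile", "schema.sql" });   // "-sf schema.sql" works too
String schemaFile = cmd.getOptionValue("schemaFile"); // "schema.sql"
{code}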



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2022-04-19 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6530:
---

Assignee: Kiran Kumar Maturi

> Fix tenantId generation for Sequential and Uniform load generators
> --
>
> Key: PHOENIX-6530
> URL: https://issues.apache.org/jira/browse/PHOENIX-6530
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.17.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.17.0, 5.1.3
>
>
> While running the perf workloads for 4.16, we found that tenantId generation 
> across the various generators does not match.
> As a result, read queries fail when the writes/data were created using a 
> different generator.
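
A sketch of the shape of the fix (the format below is purely illustrative, not 
the actual Pherf scheme): the write and read paths must derive tenant ids from 
one shared helper so the generators agree.

{code:java}
// Hypothetical shared helper: if the Sequential and Uniform generators both
// call this, reads will always find the rows that the writes created.
static String tenantId(int n) {
    return String.format("T%06d", n);
}
{code}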



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-5276) Update Multi-tenancy section of the website to run sqlline by passing tenant id

2022-04-19 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-5276:
---

Assignee: Kiran Kumar Maturi

> Update Multi-tenancy section of the website to run sqlline by passing tenant 
> id
> ---
>
> Key: PHOENIX-5276
> URL: https://issues.apache.org/jira/browse/PHOENIX-5276
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>  Labels: newbie
>
> Currently, it only has instructions for creating a tenant-specific connection 
> from a Java application, and the create table syntax doesn't include 2 or more 
> columns in the PK. 
>  
> Update the website to include the following sqlline command:
> ./bin/sqlline.py "localhost:2181;TenantId=abc"
> and modify the create table example to include a primary key constraint. 
>  
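
For comparison, the Java-application route the page currently documents boils 
down to passing the TenantId connection property; a minimal sketch (host and 
tenant id are placeholders):

{code:java}
// Tenant-specific JDBC connection; "abc" is a placeholder tenant id.
Properties props = new Properties();
props.setProperty("TenantId", "abc");
Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
{code}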



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6600) Replace deprecated getCall with updated getRpcCall

2021-11-25 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6600:

Summary: Replace deprecated getCall with updated getRpcCall  (was: 
PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2)

> Replace deprecated getCall with updated getRpcCall
> --
>
> Key: PHOENIX-6600
> URL: https://issues.apache.org/jira/browse/PHOENIX-6600
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1, 5.1.2
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Critical
>
> If we are using PhoenixRpcScheduler, it currently fails with a 
> NoSuchMethodError on getCall. This is due to CallRunner.getCall() being 
> deprecated as part of https://issues.apache.org/jira/browse/HBASE-17221 and no 
> longer present in hbase-2.
> {code:java}
> 2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
> Connection /10.231.90.110:55489; caught unexpected downstream exception.
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
> at 
> org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> {code}
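
The new summary points at the mechanical fix; a sketch of the dispatch change, 
assuming the hbase-2 CallRunner API (the surrounding routing logic is elided):

{code:java}
// Sketch: CallRunner.getCall() is gone in hbase-2; getRpcCall() replaces it.
@Override
public boolean dispatch(CallRunner callTask) {
    RpcCall call = callTask.getRpcCall();   // was: callTask.getCall()
    // ... inspect the call and route index/metadata RPCs as before ...
    return delegate.dispatch(callTask);
}
{code}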



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6600) PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2

2021-11-22 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6600:

Description: 
If we are using PhoenixRpcScheduler, it currently fails with a NoSuchMethodError 
on getCall. This is due to CallRunner.getCall() being deprecated as part of 
https://issues.apache.org/jira/browse/HBASE-17221 and no longer present in hbase-2.

{code:java}
2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
Connection /10.231.90.110:55489; caught unexpected downstream exception.
java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
at 
org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
at 
org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
{code}


  was:
If we are using PhoenixRpcScheduler, it currently fails with a NoSuchMethodError 
on getCall. 

{code:java}
2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
Connection /10.231.90.110:55489; caught unexpected downstream exception.
java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
at 
org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
at 
org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
at 

[jira] [Created] (PHOENIX-6600) PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2

2021-11-22 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6600:
---

 Summary: PhoenixRPCScheduler fails with NoSuchMethod getCall() for 
hbase-2
 Key: PHOENIX-6600
 URL: https://issues.apache.org/jira/browse/PHOENIX-6600
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2, 5.1.1
Reporter: Kiran Kumar Maturi


If we are using PhoenixRpcScheduler, it currently fails with a NoSuchMethodError 
on getCall. 

{code:java}
2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
Connection /10.231.90.110:55489; caught unexpected downstream exception.
java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
at 
org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
at 
org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
at 
org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
at 
org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
{code}




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6600) PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2

2021-11-22 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6600:
---

Assignee: Kiran Kumar Maturi

> PhoenixRPCScheduler fails with NoSuchMethod getCall() for hbase-2
> -
>
> Key: PHOENIX-6600
> URL: https://issues.apache.org/jira/browse/PHOENIX-6600
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1, 5.1.2
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Critical
>
> If we are using PhoenixRpcScheduler, it currently fails with a 
> NoSuchMethodError on getCall. 
> {code:java}
> 2021-11-22 05:36:35,161 TRACE [-EventLoopGroup-1-49] ipc.NettyRpcServer - 
> Connection /10.231.90.110:55489; caught unexpected downstream exception.
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.ipc.CallRunner.getCall()Lorg/apache/hadoop/hbase/ipc/ServerCall;
> at 
> org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler.dispatch(PhoenixRpcScheduler.java:84)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:720)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:457)
> at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:344)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:87)
> at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6581) Create a span based on TRACING_ENABLED configuration

2021-11-15 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6581:

Description: 
A client can create multiple connections and might want only a few of them to 
have tracing enabled. We can do this by controlling span creation at the parent 
level (executeUpdate/executeQuery). OpenTelemetry provides an SDK with better 
control over 
[sampling|https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling].
We will create a valid span based on the configuration (TRACING_ENABLED) 
with which the connection was created.
This can be further improved by writing a custom sampler as well.

  was:In order to have better control on tracing, we should support a tracing 
sampling rate at the connection level.


> Create a span based on TRACING_ENABLED configuration
> 
>
> Key: PHOENIX-6581
> URL: https://issues.apache.org/jira/browse/PHOENIX-6581
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>
> A client can create multiple connections and might want only a few of them 
> to have tracing enabled. We can do this by controlling span creation at the 
> parent level (executeUpdate/executeQuery). OpenTelemetry provides an SDK with 
> better control over 
> [sampling|https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling].
> We will create a valid span based on the configuration 
> (TRACING_ENABLED) with which the connection was created.
> This can be further improved by writing a custom sampler as well.
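
A minimal sketch of the gating with the OpenTelemetry API 
(io.opentelemetry.api.trace.Span/Tracer, io.opentelemetry.context.Scope; the 
span name and flag wiring are assumptions):

{code:java}
// Connections opened with TRACING_ENABLED get a real span; all others get
// the no-op invalid span, so the instrumentation stays cheap when disabled.
Span span = tracingEnabled
    ? tracer.spanBuilder("phoenix.executeQuery").startSpan()
    : Span.getInvalid();
try (Scope scope = span.makeCurrent()) {
    // run the statement ...
} finally {
    span.end();
}
{code}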



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6581) Create a span based on TRACING_ENABLED configuration

2021-11-15 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6581:

Summary: Create a span based on TRACING_ENABLED configuration  (was: Enable 
tracing sampling rate at connection level)

> Create a span based on TRACING_ENABLED configuration
> 
>
> Key: PHOENIX-6581
> URL: https://issues.apache.org/jira/browse/PHOENIX-6581
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>
> In order to have better control on tracing, we should support a tracing 
> sampling rate at the connection level.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6581) Enable tracing sampling rate at connection level

2021-10-25 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-6581:
---

Assignee: Kiran Kumar Maturi

> Enable tracing sampling rate at connection level
> 
>
> Key: PHOENIX-6581
> URL: https://issues.apache.org/jira/browse/PHOENIX-6581
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>
> In order to have better control on tracing, we should support a tracing 
> sampling rate at the connection level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6581) Enable tracing sampling rate at connection level

2021-10-25 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6581:
---

 Summary: Enable tracing sampling rate at connection level
 Key: PHOENIX-6581
 URL: https://issues.apache.org/jira/browse/PHOENIX-6581
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Kiran Kumar Maturi


In order to have better control on tracing, we should support a tracing sampling 
rate at the connection level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6522) Unique Id generation support queryId

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6522:

Summary: Unique Id generation support queryId  (was: unique Id generation 
support queryId)

> Unique Id generation support queryId
> 
>
> Key: PHOENIX-6522
> URL: https://issues.apache.org/jira/browse/PHOENIX-6522
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> Sometimes a user might want a queryId to be generated for the query rather than 
> supplying it. This feature will be config based: if enabled, it will generate a 
> queryId for all queries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6522) unique Id generation support queryId

2021-07-30 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6522:
---

 Summary: unique Id generation support queryId
 Key: PHOENIX-6522
 URL: https://issues.apache.org/jira/browse/PHOENIX-6522
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


Sometimes a user might want a queryId to be generated for the query rather than 
supplying it. This feature will be config based: if enabled, it will generate a 
queryId for all queries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6521) QueryId support in Phoenix Coprocessor

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6521:

Description: To have better analysis for puts (with indexes) and scans, 
propagating the query id at the Phoenix coprocessor would be necessary  (was: To have 
better analysis for put (with indexes ) and scans propagating query id at 
phoenix coprocessor would be necessary)

> QueryId support in Phoenix Coprocessor 
> ---
>
> Key: PHOENIX-6521
> URL: https://issues.apache.org/jira/browse/PHOENIX-6521
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> To have better analysis for puts (with indexes) and scans, propagating the 
> query id at the Phoenix coprocessor would be necessary.
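
One plausible way to propagate it (the attribute name is hypothetical): HBase 
operations carry arbitrary attributes that a coprocessor can read back on the 
server side.

{code:java}
// Client side: tag the scan with the queryId (attribute name is an assumption).
Scan scan = new Scan();
scan.setAttribute("phoenix.queryId", Bytes.toBytes(queryId));

// Server side, inside a coprocessor hook:
byte[] idBytes = scan.getAttribute("phoenix.queryId");
String queryId = idBytes == null ? null : Bytes.toString(idBytes);
{code}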



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6521) QueryId support in Phoenix Coprocessor

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6521:

Parent: PHOENIX-5974
Issue Type: Sub-task  (was: New Feature)

> QueryId support in Phoenix Coprocessor 
> ---
>
> Key: PHOENIX-6521
> URL: https://issues.apache.org/jira/browse/PHOENIX-6521
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> To have better analysis for puts (with indexes) and scans, propagating the 
> query id at the Phoenix coprocessor would be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6521) QueryId support in Phoenix Coprocessor

2021-07-30 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6521:
---

 Summary: QueryId support in Phoenix Coprocessor 
 Key: PHOENIX-6521
 URL: https://issues.apache.org/jira/browse/PHOENIX-6521
 Project: Phoenix
  Issue Type: New Feature
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


To have better analysis for puts (with indexes) and scans, propagating the query 
id at the Phoenix coprocessor would be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6520) Support for logging QueryId

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-6520:

Parent: PHOENIX-5974
Issue Type: Sub-task  (was: New Feature)

> Support for logging QueryId 
> 
>
> Key: PHOENIX-6520
> URL: https://issues.apache.org/jira/browse/PHOENIX-6520
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> On a heavily loaded cluster, the Phoenix client makes a lot of queries. 
> Currently it is very difficult to know which query failed and what happened 
> with the request. Logging a 
> queryId ([PHOENIX-5974|https://issues.apache.org/jira/browse/PHOENIX-5974]) 
> along with the logs can help us debug better. This feature will be config 
> based.
> I am planning to use the [log4j Thread 
> Context|https://logging.apache.org/log4j/2.x/manual/thread-context.html] to 
> store the queryId and modify the log 
> [pattern|https://github.com/apache/phoenix/blob/master/phoenix-pherf/config/log4j.properties]
>  to show the queryId when the feature is enabled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6520) Support for logging QueryId

2021-07-30 Thread Kiran Kumar Maturi (Jira)
Kiran Kumar Maturi created PHOENIX-6520:
---

 Summary: Support for logging QueryId 
 Key: PHOENIX-6520
 URL: https://issues.apache.org/jira/browse/PHOENIX-6520
 Project: Phoenix
  Issue Type: New Feature
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


On a heavily loaded cluster, the Phoenix client makes a lot of queries. Currently 
it is very difficult to know which query failed and what happened with the 
request. Logging a 
queryId ([PHOENIX-5974|https://issues.apache.org/jira/browse/PHOENIX-5974]) 
along with the logs can help us debug better. This feature will be config 
based.

I am planning to use the [log4j Thread 
Context|https://logging.apache.org/log4j/2.x/manual/thread-context.html] to 
store the queryId and modify the log 
[pattern|https://github.com/apache/phoenix/blob/master/phoenix-pherf/config/log4j.properties]
 to show the queryId when the feature is enabled.
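
A minimal sketch of that approach with the log4j 2 API (where exactly Phoenix 
would set it is an assumption; the layout token is %X{queryId}):

{code:java}
import org.apache.logging.log4j.ThreadContext;

// Put the queryId into the per-thread context before executing and clear it
// after; a layout pattern containing %X{queryId} then prints it on every line.
ThreadContext.put("queryId", queryId);
try {
    // execute the statement ...
} finally {
    ThreadContext.remove("queryId");
}
{code}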



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5974) QueryId for Phoenix Query

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5974:

Description: Add a QueryId corresponding to a Phoenix Query which can be 
used to uniquely identify the query. Propagate this QueryId further to HBase. 
HBase supports an 
[identifier|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/OperationWithAttributes.html#getId--]
 for requests. HBase logs the identifier when a request times out or has a slow 
response. This information is very useful in associating client-side and 
server-side information. The Phoenix QueryId will help in analyzing client-side 
logs as well.  
(was: Add a QueryId corresponding to a Phoenix Query which can be used to 
uniquely identify the query. Propagate this QueryId further to HBase. HBase 
supports an 
[identifier](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/OperationWithAttributes.html#getId--)
 for requests. HBase logs the identifier when a request times out or has a slow 
response. This information is very useful in associating client-side and 
server-side information. The Phoenix QueryId will help in analyzing client-side 
logs as well.)

> QueryId for Phoenix Query
> -
>
> Key: PHOENIX-5974
> URL: https://issues.apache.org/jira/browse/PHOENIX-5974
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Pranshu Khandelwal
>Assignee: Kiran Kumar Maturi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a QueryId corresponding to a Phoenix Query which can be used to uniquely 
> identify the query. Propagate this QueryId further to HBase. HBase supports an 
> [identifier|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/OperationWithAttributes.html#getId--]
>  for requests. HBase logs the identifier when a request times out or has a 
> slow response. This information is very useful in associating client-side and 
> server-side information. The Phoenix QueryId will help in analyzing 
> client-side logs as well.
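
For illustration, the HBase hook referenced above is 
OperationWithAttributes#setId; tagging an outgoing scan is a one-liner (where 
Phoenix would call it is an assumption):

{code:java}
// HBase echoes this id in its slow-response/timeout logging, letting server
// logs be correlated with the Phoenix client's queryId.
Scan scan = new Scan();
scan.setId(queryId);
{code}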



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5974) QueryId for Phoenix Query

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5974:

Description: Add a QueryId corresponding to a Phoenix Query which can be 
used to uniquely identify the query. Propagate this QueryId further to HBase. 
HBase supports an 
[identifier](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/OperationWithAttributes.html#getId--)
 for requests. HBase logs the identifier when a request times out or has a slow 
response. This information is very useful in associating client-side and 
server-side information. The Phoenix QueryId will help in analyzing client-side 
logs as well.  
(was: Add a TraceId corresponding to a Phoenix Query which translates further 
into an HBase mutation id. Using this TraceId one can log information about 
bottlenecks in the network and trace the flow of a SQL-like Phoenix query until 
it compiles into an HBase mutation and ultimately into a message to the RPC 
layer.)

> QueryId for Phoenix Query
> -
>
> Key: PHOENIX-5974
> URL: https://issues.apache.org/jira/browse/PHOENIX-5974
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Pranshu Khandelwal
>Assignee: Kiran Kumar Maturi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a QueryId corresponding to a Phoenix Query which can be used to uniquely 
> identify the query. Propagate this QueryId further to HBase. HBase supports an 
> [identifier](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/OperationWithAttributes.html#getId--)
>  for requests. HBase logs the identifier when a request times out or has a 
> slow response. This information is very useful in associating client-side and 
> server-side information. The Phoenix QueryId will help in analyzing 
> client-side logs as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5974) QueryId for Phoenix Query

2021-07-30 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5974:

Summary: QueryId for Phoenix Query  (was: RequestId for Phoenix Query)

> QueryId for Phoenix Query
> -
>
> Key: PHOENIX-5974
> URL: https://issues.apache.org/jira/browse/PHOENIX-5974
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Pranshu Khandelwal
>Assignee: Kiran Kumar Maturi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a TraceId corresponding to a Phoenix Query which translates further into 
> an HBase mutation id. Using this TraceId one can log information about 
> bottlenecks in the network and trace the flow of a SQL-like Phoenix query 
> until it compiles into an HBase mutation and ultimately into a message to 
> the RPC layer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5974) RequestId for Phoenix Query

2021-06-17 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-5974:
---

Assignee: Kiran Kumar Maturi

> RequestId for Phoenix Query
> ---
>
> Key: PHOENIX-5974
> URL: https://issues.apache.org/jira/browse/PHOENIX-5974
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Pranshu Khandelwal
>Assignee: Kiran Kumar Maturi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a TraceId corresponding to a Phoenix Query which translates further into 
> an HBase mutation id. Using this TraceId one can log information about 
> bottlenecks in the network and trace the flow of a SQL-like Phoenix query 
> until it compiles into an HBase mutation and ultimately into a message to 
> the RPC layer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5974) RequestId for Phoenix Query

2021-06-15 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5974:

Summary: RequestId for Phoenix Query  (was: RequestId Tracing feature for 
Phoenix Query)

> RequestId for Phoenix Query
> ---
>
> Key: PHOENIX-5974
> URL: https://issues.apache.org/jira/browse/PHOENIX-5974
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Pranshu Khandelwal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a TraceId corresponding to a Phoenix Query which translates further into 
> an HBase mutation id. Using this TraceId one can log information about 
> bottlenecks in the network and trace the flow of a SQL-like Phoenix query 
> until it compiles into an HBase mutation and ultimately into a message to 
> the RPC layer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5215) Remove and replace HTrace

2020-08-12 Thread Kiran Kumar Maturi (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-5215:
---

Assignee: Kiran Kumar Maturi

> Remove and replace HTrace
> -
>
> Key: PHOENIX-5215
> URL: https://issues.apache.org/jira/browse/PHOENIX-5215
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Andrew Kyle Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> HTrace is dead.
> Hadoop is discussing a replacement of HTrace with OpenTracing; see 
> HADOOP-15566. 
> HBase is having the same discussion in HBASE-22120.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-06-19 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.14-HBase-1.3.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.3.v1.patch, PHOENIX-5269.4.14-HBase-1.4.v3.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v4.patch, PHOENIX-5269.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5269.master.v1.patch, PHOENIX-5269.master.v2.patch, diff.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 
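
To make the cost concrete, every AccessControlClient call is a remote 
round-trip to the ACL table; a sketch of such a client-side call (illustrative 
only; org.apache.hadoop.hbase.client and 
org.apache.hadoop.hbase.security.access):

{code:java}
// Fine for an admin tool, unsuitable as a per-request server-side check:
// each call below is an RPC to the single ACL region.
try (Connection conn = ConnectionFactory.createConnection(conf)) {
    List<UserPermission> perms = AccessControlClient.getUserPermissions(conn, ".*");
} catch (Throwable t) {   // getUserPermissions is declared to throw Throwable
    // handle/log
}
{code}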



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-06-18 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.x-HBase-1.3.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.3.v1.patch, PHOENIX-5269.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5269.4.x-HBase-1.5.v1.patch, PHOENIX-5269.master.v1.patch, 
> PHOENIX-5269.master.v2.patch, diff.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-06-17 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.master.v2.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5269.master.v1.patch, PHOENIX-5269.master.v2.patch, diff.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-06-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.master.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5269.master.v1.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-24 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.x-HBase-1.5.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-24 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.x-HBase-1.4.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.14-HBase-1.4.v4.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269.4.14-HBase-1.4.v3.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-09 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269-4.14-HBase-1.4.v2.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-09 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269-4.14-HBase-1.4.v1.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-08 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: (was: PHOENIX-5269-4.14.1-HBase-1.4.patch)

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-08 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269-4.14-HBase-1.4.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-05-08 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5269:

Attachment: PHOENIX-5269-4.14.1-HBase-1.4.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Attachments: PHOENIX-5269-4.14.1-HBase-1.4.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup these caches are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke), the AccessController 
> on the ACL table "broadcasts" the change via ZooKeeper, which updates the 
> cache. This is performed and managed by TableAuthManager but exposed as an 
> API by AccessChecker. AccessChecker is the result of a refactor that was 
> committed as far back as branch-1.4, I believe.
> Phoenix implements its own access controller but uses the client API 
> AccessControlClient instead. AccessControlClient neither caches nor uses the 
> ZK-based cache update mechanism, because it is designed for client-side use.
> The use of AccessControlClient instead of AccessChecker is not scalable: 
> every permissions check triggers a remote RPC to the ACL table, which is 
> generally a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5267) With namespaces enabled Phoenix client times out with high loads

2019-05-02 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5267:

Description: 
Steps to reproduce:
 * Enable namespaces for Phoenix 4.14.1 and HBase 1.3
 * Run a high load using the Pherf client with 48 threads

After some time the client hangs and gives a timeout exception:
{code:java}
[pool-1-thread-1] WARN org.apache.phoenix.pherf.workload.WriteWorkload -

java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1238263: Call to  failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, waitTime=120001, 
operationTimeout=12 expired. row '^@TEST^@TABLE' on table 'SYSTEM:CATALOG' 
at region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
hostname=, seqNum=2

at java.util.concurrent.FutureTask.report(FutureTask.java:122)

at java.util.concurrent.FutureTask.get(FutureTask.java:192)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:239)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:189)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:165)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: org.apache.phoenix.exception.PhoenixIOException: 
callTimeout=120, callDuration=1238263: Call to  failed on local 
exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, 
waitTime=120001, operationTimeout=12 expired. row '^@TEST^@TABLE' on table 
'SYSTEM:CATALOG' at 
region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
hostname=, seqNum=2

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1379)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1343)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1560)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:644)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:538)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:530)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:526)

at 
org.apache.phoenix.execute.MutationState.validateAndGetServerTimestamp(MutationState.java:755)

at 
org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:743)

at org.apache.phoenix.execute.MutationState.send(MutationState.java:875)

at org.apache.phoenix.execute.MutationState.send(MutationState.java:1360)

at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1183)

at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)

at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)


{code}

  was:
Steps to reproduce:
 * Enable namespaces for Phoenix 4.14.1 and HBase 1.3
 * Run a high load using the Pherf client with 48 threads

After some time the client hangs and gives a timeout exception:
{code:java}
[pool-1-thread-1] WARN org.apache.phoenix.pherf.workload.WriteWorkload -

java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1238263: Call to  failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, waitTime=120001, 
operationTimeout=12 expired. row '^@test^@table' on table 'SYSTEM:CATALOG' 
at region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
hostname=, seqNum=2

at java.util.concurrent.FutureTask.report(FutureTask.java:122)

at java.util.concurrent.FutureTask.get(FutureTask.java:192)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:239)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:189)

at 

[jira] [Created] (PHOENIX-5267) With namespaces enabled Phoenix client times out with high loads

2019-05-02 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5267:
---

 Summary: With namespaces enabled Phoenix client times out with 
high loads
 Key: PHOENIX-5267
 URL: https://issues.apache.org/jira/browse/PHOENIX-5267
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Kiran Kumar Maturi


Steps to reproduce:
 * Enable namespaces for Phoenix 4.14.1 and HBase 1.3
 * Run a high load using the Pherf client with 48 threads

After some time the client hangs and gives a timeout exception:
{code:java}
[pool-1-thread-1] WARN org.apache.phoenix.pherf.workload.WriteWorkload -

java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1238263: Call to  failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, waitTime=120001, 
operationTimeout=12 expired. row '^@test^@table' on table 'SYSTEM:CATALOG' 
at region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
hostname=, seqNum=2

at java.util.concurrent.FutureTask.report(FutureTask.java:122)

at java.util.concurrent.FutureTask.get(FutureTask.java:192)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:239)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:189)

at 
org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:165)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: org.apache.phoenix.exception.PhoenixIOException: 
callTimeout=120, callDuration=1238263: Call to  failed on local 
exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, 
waitTime=120001, operationTimeout=12 expired. row '^@TEST^@TABLE' on table 
'SYSTEM:CATALOG' at 
region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
hostname=, seqNum=2

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1379)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1343)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1560)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:644)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:538)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:530)

at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:526)

at 
org.apache.phoenix.execute.MutationState.validateAndGetServerTimestamp(MutationState.java:755)

at 
org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:743)

at org.apache.phoenix.execute.MutationState.send(MutationState.java:875)

at org.apache.phoenix.execute.MutationState.send(MutationState.java:1360)

at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1183)

at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)

at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)

at 
org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)


{code}
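
While the root cause is investigated, a common stopgap for this failure mode 
is to raise the client-side timeouts named in the stack trace. A minimal 
sketch, assuming a placeholder ZooKeeper quorum "zk-host"; the ten-minute 
values are illustrative, not a recommendation:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeoutTuning {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Phoenix statement-level timeout.
        props.setProperty("phoenix.query.timeoutMs", "600000");
        // HBase per-RPC and overall operation timeouts (the limits behind the
        // callTimeout / CallTimeoutException shown above).
        props.setProperty("hbase.rpc.timeout", "600000");
        props.setProperty("hbase.client.operation.timeout", "600000");

        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
{code}
This only widens the window for the SYSTEM:CATALOG call to complete; it does 
not remove the single-region bottleneck described above.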



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5189) Index Scrutiny Fails when data table field (type:double) value is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5189:

Summary: Index Scrutiny Fails when data table field (type:double)  value is 
null  (was: Index Scrutiny Fails when data table field (type:double)  is null)

> Index Scrutiny Fails when data table field (type:double)  value is null
> ---
>
> Key: PHOENIX-5189
> URL: https://issues.apache.org/jira/browse/PHOENIX-5189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Priority: Minor
>
> Steps to reproduce:
> 1. Create a data table
> {code}
> CREATE TABLE IF NOT EXISTS TEST(k1 CHAR(5) NOT NULL, k2 INTEGER NOT NULL, v1 
> DOUBLE, v2 VARCHAR(1),CONSTRAINT PK PRIMARY KEY(
> k1,
> k2
> ))
> {code}
> 2. Create index table
> {code}
> CREATE INDEX IF NOT EXISTS TEST_INDEX ON TEST (k1,v1) INCLUDE (v2)
> {code}
> 3. Write Data
> {code}
> UPSERT INTO TEST (k1, k2, v1, v2) VALUES ('0', 1, null, 'a' )
> {code}
> 4. Run Index Scrutiny Tool 
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt TEST -it 
> TEST_INDEX -src DATA_TABLE_SOURCE
> {code}
> The MapReduce job logs will contain the exception:
> {code}
> 2019-03-12 05:10:51,085 INFO  [atcher event handler] impl.TaskAttemptImpl - 
> Diagnostics report from attempt_1550549758736_0287_m_00_1001: Error: 
> org.apache.phoenix.schema.IllegalDataException: java.sql.SQLException: ERROR 
> 201 (22000): Illegal data. DOUBLE may not be null
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:305)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:93)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:86)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.getPkHash(IndexScrutinyMapper.java:370)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.buildTargetStatement(IndexScrutinyMapper.java:250)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.processBatch(IndexScrutinyMapper.java:212)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.cleanup(IndexScrutinyMapper.java:185)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. DOUBLE may 
> not be null
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5189) Index Scrutiny Fails when data table field (type:double) is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5189:

Summary: Index Scrutiny Fails when data table field (type:double)  is null  
(was: Index Scrutiny Fails when field of type double is null)

> Index Scrutiny Fails when data table field (type:double)  is null
> -
>
> Key: PHOENIX-5189
> URL: https://issues.apache.org/jira/browse/PHOENIX-5189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Priority: Minor
>
> Steps to reproduce:
> 1. Create a data table
> {code}
> CREATE TABLE IF NOT EXISTS TEST(k1 CHAR(5) NOT NULL, k2 INTEGER NOT NULL, v1 
> DOUBLE, v2 VARCHAR(1),CONSTRAINT PK PRIMARY KEY(
> k1,
> k2
> ))
> {code}
> 2. Create index table
> {code}
> CREATE INDEX IF NOT EXISTS TEST_INDEX ON TEST (k1,v1) INCLUDE (v2)
> {code}
> 3. Write Data
> {code}
> UPSERT INTO TEST (k1, k2, v1, v2) VALUES ('0', 1, null, 'a' )
> {code}
> 4. Run Index Scrutiny Tool 
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt TEST -it 
> TEST_INDEX -src DATA_TABLE_SOURCE
> {code}
> The MapReduce job logs will contain the exception:
> {code}
> 2019-03-12 05:10:51,085 INFO  [atcher event handler] impl.TaskAttemptImpl - 
> Diagnostics report from attempt_1550549758736_0287_m_00_1001: Error: 
> org.apache.phoenix.schema.IllegalDataException: java.sql.SQLException: ERROR 
> 201 (22000): Illegal data. DOUBLE may not be null
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:305)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:93)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:86)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.getPkHash(IndexScrutinyMapper.java:370)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.buildTargetStatement(IndexScrutinyMapper.java:250)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.processBatch(IndexScrutinyMapper.java:212)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.cleanup(IndexScrutinyMapper.java:185)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. DOUBLE may 
> not be null
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5189) Index Scrutiny Fails when field of type double is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5189:
---

 Summary: Index Scrutiny Fails when field of type double is null
 Key: PHOENIX-5189
 URL: https://issues.apache.org/jira/browse/PHOENIX-5189
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Kiran Kumar Maturi


Steps to reproduce:
1. Create a data table
{code}
CREATE TABLE IF NOT EXISTS TEST(k1 CHAR(5) NOT NULL, k2 INTEGER NOT NULL, v1 
DOUBLE, v2 VARCHAR(1),CONSTRAINT PK PRIMARY KEY(
k1,
k2
))
{code}

2. Create index table
{code}
CREATE INDEX IF NOT EXISTS TEST_INDEX ON TEST (k1,v1) INCLUDE (v2)
{code}

3. Write Data
{code}
UPSERT INTO TEST (k1, k2, v1, v2) VALUES ('0', 1, null, 'a' )
{code}

4. Run Index Scrutiny Tool 
{code}
hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt TEST -it 
TEST_INDEX -src DATA_TABLE_SOURCE
{code}

The MapReduce job logs will contain the exception:
{code}
2019-03-12 05:10:51,085 INFO  [atcher event handler] impl.TaskAttemptImpl - 
Diagnostics report from attempt_1550549758736_0287_m_00_1001: Error: 
org.apache.phoenix.schema.IllegalDataException: java.sql.SQLException: ERROR 
201 (22000): Illegal data. DOUBLE may not be null
at 
org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:305)
at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:93)
at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:86)
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.getPkHash(IndexScrutinyMapper.java:370)
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.buildTargetStatement(IndexScrutinyMapper.java:250)
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.processBatch(IndexScrutinyMapper.java:212)
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.cleanup(IndexScrutinyMapper.java:185)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. DOUBLE may 
not be null
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
{code}
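
The trace shows PDouble.toBytes being handed a SQL NULL while the mapper 
builds the primary-key hash. A minimal null-guard sketch of the idea behind a 
fix; the class and method names here are hypothetical, not the actual 
IndexScrutinyMapper internals:
{code:java}
import java.nio.charset.StandardCharsets;

class NullSafePkHash {
    // Marker used in place of a value so NULL columns still hash consistently.
    private static final byte[] NULL_BYTES = "\0".getBytes(StandardCharsets.UTF_8);

    // Map SQL NULL to a marker instead of delegating to a type codec
    // (e.g. PDouble.toBytes) that throws IllegalDataException on null.
    static byte[] toBytesNullSafe(Object value) {
        return value == null
                ? NULL_BYTES
                : value.toString().getBytes(StandardCharsets.UTF_8);
    }
}
{code}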



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-26 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-master.01.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.x-HBase-1.3.01.patch, PHOENIX-5137-master.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.
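
A self-contained sketch of the planned change above: check for region closing 
once unconditionally, then keep checking while the memstore stays over the 
blocking threshold. The RegionView interface is a stand-in for illustration, 
not the real HBase Region API:
{code:java}
import java.io.IOException;

class RebuildCommitSketch {
    // Stand-in for the subset of region state the loop needs.
    interface RegionView {
        long getMemstoreSize();
        boolean isClosing();
    }

    static void checkForRegionClosing(RegionView region) throws IOException {
        if (region.isClosing()) {
            throw new IOException("Region is closing; aborting batch commit");
        }
    }

    // Called before committing a rebuild batch. The unconditional first check
    // means a split is never blocked, even when blockingMemstoreSize == -1
    // (the rebuild-indices case described above).
    static void waitToCommit(RegionView region, long blockingMemstoreSize)
            throws IOException {
        try {
            checkForRegionClosing(region);
            for (int i = 0; blockingMemstoreSize > 0
                    && region.getMemstoreSize() > blockingMemstoreSize
                    && i < 30; i++) {
                checkForRegionClosing(region);
                Thread.sleep(100); // back off while flushes drain the memstore
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }
}
{code}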



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-21 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-4.x-HBase-1.3.01.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.x-HBase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-21 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: (was: PHOENIX-5137-4.14-HBase-1.3.02.patch)

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-21 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-4.14-HBase-1.3.02.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-21 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-4.14-HBase-1.3.02.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-19 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
checkForRegionClosing();
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}

Steps to reproduce: 
1. Create a table with one index (note the start time) 
2. Add 1-2 million rows 
3. Wait until the index is active 
4. Disable the index with the start time (noted in step 1) 
5. Once the rebuilder starts, split the data table region 

Repeat the steps after applying the patch to check the difference.


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
checkForRegionClosing();
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce: 
> 1. Create a table with one index (note the start time) 
> 2. Add 1-2 million rows 
> 3. Wait until the index is active 
> 4. Disable the index with the start time (noted in step 1) 
> 5. Once the rebuilder starts, split the data table region 
> Repeat the steps after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-18 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
checkForRegionClosing();
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-14 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-4.14-Hbase-1.3.01.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> int i = 0;
> do {
>     try {
>         if (i > 0) {
>             Thread.sleep(100);
>         }
>         checkForRegionClosing();
>     } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>         throw new IOException(e);
>     }
> } while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, 
in order to differentiate between the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing irrespective of the 
blockingMemstoreSize
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> int i = 0;
> do {
>     try {
>         if (i > 0) {
>             Thread.sleep(100);
>         }
>         checkForRegionClosing();
>     } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>         throw new IOException(e);
>     }
> } while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Summary: Index Rebuilder scan increases data table region split time  (was: 
Index Rebuilder blocks data table region split)

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva]
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices, in order to differentiate between the index rebuilder 
> retries (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits 
> that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only 
> happens when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing irrespective of the 
> blockingMemstoreSize
> {code:java}
> int i = 0;
> do {
>try {
>  if (i > 0) {
>  Thread.sleep(100); 
>  }
>  checkForRegionClosing();   
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new IOException(e);
> }
> }while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices() 
in order to differentiate the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing runs only 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to perform the check for region closing irrespective of 
blockingMemstoreSize:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva]

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices() 
in order to differentiate the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing runs only 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to perform the check for region closing irrespective of 
blockingMemstoreSize:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices() in order to differentiate the index rebuilder retries 
> (UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that 
> happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing runs only 
> when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>     try {
>         checkForRegionClosing();
>         ...
> {code}
> The plan is to perform the check for region closing irrespective of 
> blockingMemstoreSize:
> {code:java}
> int i = 0;
> do {
>     try {
>         if (i > 0) {
>             Thread.sleep(100);
>         }
>         checkForRegionClosing();
>     } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>         throw new IOException(e);
>     }
> } while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5137) Index Rebuild blocks data table region split

2019-02-13 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5137:
---

 Summary: Index Rebuild blocks data table region split
 Key: PHOENIX-5137
 URL: https://issues.apache.org/jira/browse/PHOENIX-5137
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


[~lhofhansl] [~vincentpoon] [~tdsilva]

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices() 
in order to differentiate the index rebuilder retries 
(UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen 
in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing runs only 
when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to perform the check for region closing irrespective of 
blockingMemstoreSize:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder blocks data table region split

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Summary: Index Rebuilder blocks data table region split  (was: Index 
Rebuild blocks data table region split)

> Index Rebuilder blocks data table region split
> --
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva]
> As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for 
> rebuildIndices() in order to differentiate the index rebuilder retries 
> (UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that 
> happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing runs only 
> when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>     try {
>         checkForRegionClosing();
>         ...
> {code}
> The plan is to perform the check for region closing irrespective of 
> blockingMemstoreSize:
> {code:java}
> int i = 0;
> do {
>     try {
>         if (i > 0) {
>             Thread.sleep(100);
>         }
>         checkForRegionClosing();
>     } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>         throw new IOException(e);
>     }
> } while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-02-01 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-master.03.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.
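> A minimal sketch of the race (hypothetical types, not the actual Phoenix 
> client code): the retry path re-activates the index without checking whether 
> the rebuilder ever caught it up.
> {code:java}
> enum IndexState { ACTIVE, INACTIVE, PENDING_DISABLE }
> 
> class IndexStateMachine {
>     private IndexState state = IndexState.INACTIVE; // rebuilder has not run yet
> 
>     void onWriteFailure() {
>         state = IndexState.PENDING_DISABLE;  // INACTIVE --> PENDING_DISABLE
>     }
> 
>     void onRetrySuccess() {
>         // Bug: unconditionally re-activates, even though the writes missed
>         // while the index was INACTIVE were never replayed by the rebuilder.
>         state = IndexState.ACTIVE;           // PENDING_DISABLE --> ACTIVE
>     }
> 
>     public static void main(String[] args) {
>         IndexStateMachine m = new IndexStateMachine();
>         m.onWriteFailure();
>         m.onRetrySuccess();
>         System.out.println(m.state);         // ACTIVE, but the index is stale
>     }
> }{code}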



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-02-01 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.05.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-02-01 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-master.addendum-1.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> The issue is that the region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index or 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSCAT table through the cached connections. Since SYSCAT cannot be 
> reached, KillServerFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable():
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
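> One way to avoid this class of problem is to reference-count the shared, 
> region-server-level connection so that a single region closing does not tear 
> it down for regions that are still writing. A minimal sketch with hypothetical 
> names (not the actual Phoenix fix):
> {code:java}
> import java.util.concurrent.atomic.AtomicInteger;
> 
> class SharedConnectionHolder {
>     // Stand-in for the RS-level cached HBase connection.
>     interface Conn { void close(); }
> 
>     private final AtomicInteger refs = new AtomicInteger();
>     private final Conn conn;
> 
>     SharedConnectionHolder(Conn conn) { this.conn = conn; }
> 
>     Conn acquire() {   // a region starts using the shared connection
>         refs.incrementAndGet();
>         return conn;
>     }
> 
>     void release() {   // a region is closing
>         // Only the last region to release actually closes the connection,
>         // so other regions' index writes are not interrupted.
>         if (refs.decrementAndGet() == 0) {
>             conn.close();
>         }
>     }
> }{code}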



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.04.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-master.01.patch, 
> PHOENIX-5094-master.02.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: (was: PHOENIX-5094-4.14-HBase-1.3.03.patch)

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-master.01.patch, 
> PHOENIX-5094-master.02.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.03.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.03.patch, PHOENIX-5094-master.01.patch, 
> PHOENIX-5094-master.02.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-master.02.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.03.patch, PHOENIX-5094-master.01.patch, 
> PHOENIX-5094-master.02.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.03.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-master.01.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-29 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-master.02.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch
>
>
> The issue is that the region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index or 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSCAT table through the cached connections. Since SYSCAT cannot be 
> reached, KillServerFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable():
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-28 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-master.01.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-master.01.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-28 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-master.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch
>
>
> The issue is that the region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index or 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSCAT table through the cached connections. Since SYSCAT cannot be 
> reached, KillServerFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable():
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-27 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.02.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-23 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.01.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-23 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: (was: PHOENIX-5094-4.14-HBase-1.3.01.patch)

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-22 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5094:

Attachment: PHOENIX-5094-4.14-HBase-1.3.01.patch

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-16 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.05.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch
>
>
> The issue is that the region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index or 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSCAT table through the cached connections. Since SYSCAT cannot be 
> reached, KillServerFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable():
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-01-16 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi reassigned PHOENIX-5094:
---

Assignee: Kiran Kumar Maturi

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining 
> it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (PENDING_DISABLE --> ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-16 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.04.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch
>
>
> The issue is that the region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index or 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
>     this.retryingFactory.shutdown();
>     this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSCAT table through the cached connections. Since SYSCAT cannot be 
> reached, KillServerFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable():
> {code:java}
> if (connection == null || connection.isClosed()) {
>     throw new IllegalArgumentException("Connection is null or closed.");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2019-01-16 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.14-HBase-1.3.01.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-5073-4.14-HBase-1.3.01.patch, 
> PHOENIX-5073-4.x-HBase-1.3.001.patch, PHOENIX-5073-4.x-HBase-1.3.002.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.1.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.2.patch, PHOENIX-5073-4.x-HBase-1.3.003.patch, 
> PHOENIX-5073-master-01.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14, we are seeing this exception:
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce:
>  # Start the server on 4.14.
>  # Start load with both 4.13 and 4.14 clients.
>  # The 4.13 client will show the above error (it occurs only when the index 
> state transitions to PENDING_DISABLE, a state that is not defined in 4.13).
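> For illustration, a minimal sketch of why the old client throws (hypothetical 
> serialized values; only the shape mirrors PIndexState.fromSerializedValue()):
> {code:java}
> enum OldPIndexState {
>     ACTIVE('a'), INACTIVE('i'), DISABLE('x'); // 4.13 enum: no entry for 'w'
> 
>     private final char serializedValue;
> 
>     OldPIndexState(char serializedValue) { this.serializedValue = serializedValue; }
> 
>     static OldPIndexState fromSerializedValue(char value) {
>         for (OldPIndexState state : values()) {
>             if (state.serializedValue == value) {
>                 return state;
>             }
>         }
>         // A 4.14 server can hand back 'w' (PENDING_DISABLE); the 4.13 client
>         // has no matching constant and fails exactly as in the stack trace.
>         throw new IllegalArgumentException(
>                 "Unable to PIndexState enum for serialized value of '" + value + "'");
>     }
> 
>     public static void main(String[] args) {
>         fromSerializedValue('w'); // throws IllegalArgumentException
>     }
> }{code}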



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2019-01-08 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-master-01.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch, 
> PHOENIX-5073-4.x-HBase-1.3.002.patch, PHOENIX-5073-4.x-HBase-1.3.003.1.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.2.patch, PHOENIX-5073-4.x-HBase-1.3.003.patch, 
> PHOENIX-5073-master-01.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14, we are seeing this exception:
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce:
>  # Start the server on 4.14.
>  # Start load with both 4.13 and 4.14 clients.
>  # The 4.13 client will show the above error (it occurs only when the index 
> state transitions to PENDING_DISABLE, a state that is not defined in 4.13).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2019-01-03 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.x-HBase-1.3.003.2.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch, 
> PHOENIX-5073-4.x-HBase-1.3.002.patch, PHOENIX-5073-4.x-HBase-1.3.003.1.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.2.patch, PHOENIX-5073-4.x-HBase-1.3.003.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14, we are seeing this exception:
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce:
>  # Start the server on 4.14.
>  # Start load with both 4.13 and 4.14 clients.
>  # The 4.13 client will show the above error (it occurs only when the index 
> state transitions to PENDING_DISABLE, a state that is not defined in 4.13).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-28 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.x-HBase-1.3.003.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch, 
> PHOENIX-5073-4.x-HBase-1.3.002.patch, PHOENIX-5073-4.x-HBase-1.3.003.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14, we are seeing this exception:
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce.
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # 4.13 client will show the above error (it occurs only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  
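> A minimal, illustrative sketch of the failure mechanism implied by the stack 
> trace above; this is not Phoenix's actual source, and the state names and 
> serialized characters other than PENDING_DISABLE/'w' are assumptions. A 
> lookup like PIndexState.fromSerializedValue() can only match the enum values 
> the client was compiled with, so a 4.13 client throws on the 'w' that a 4.14 
> server returns.
> {code:java}
> // Illustrative only: why an unknown serialized state byte throws on an old client.
> public enum PIndexStateSketch {
>     BUILDING('b'), ACTIVE('a'), DISABLE('x'); // assumed characters, for illustration
>     // A 4.14 server also knows PENDING_DISABLE('w'); a 4.13 client does not.
>
>     private final char serializedValue;
>
>     PIndexStateSketch(char serializedValue) {
>         this.serializedValue = serializedValue;
>     }
>
>     public static PIndexStateSketch fromSerializedValue(char serializedValue) {
>         for (PIndexStateSketch state : values()) {
>             if (state.serializedValue == serializedValue) {
>                 return state;
>             }
>         }
>         // A 4.13 client lands here when a 4.14 server hands back 'w':
>         throw new IllegalArgumentException(
>                 "Unable to PIndexState enum for serialized value of '"
>                         + serializedValue + "'");
>     }
> }{code}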



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-27 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.x-HBase-1.3.002.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch, 
> PHOENIX-5073-4.x-HBase-1.3.002.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce.
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # 4.13 client will show the above error (it occurs only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-27 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: (was: PHOENIX-5073-4.x-HBase-1.3.002.patch)

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce.
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # 4.13 client will show the above error (it occurs only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-27 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.x-HBase-1.3.002.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch, 
> PHOENIX-5073-4.x-HBase-1.3.002.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce.
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # 4.13 client will show the above error (it occurs only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-26 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5073:

Attachment: PHOENIX-5073-4.x-HBase-1.3.001.patch

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5073-4.x-HBase-1.3.001.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce.
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # 4.13 client will show the above error (it occurs only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2018-12-19 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5073:
---

 Summary: Invalid PIndexState during rolling upgrade from 4.13 to 
4.14
 Key: PHOENIX-5073
 URL: https://issues.apache.org/jira/browse/PHOENIX-5073
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
{code:java}
java.util.concurrent.ExecutionException: java.sql.SQLException: 
java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
value of 'w'{code}
Steps to reproduce.
 # Start the server on 4.14
 # Start load with both 4.13 and 4.14 clients
 # 4.13 client will show the above error
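A hypothetical compatibility shim, sketched under stated assumptions and not 
necessarily the approach taken in the attached patches: the server could 
downgrade a state that the requesting client cannot deserialize. The version 
threshold and the fallback character 'x' are invented for illustration; only 
'w' for PENDING_DISABLE comes from the report above.
{code:java}
// Hypothetical only: map states an old client does not know to one it does
// before returning index state to that client.
public final class IndexStateCompatSketch {
    // Assumed: 4.14 is the first minor version that understands PENDING_DISABLE.
    private static final int MIN_MINOR_WITH_PENDING_DISABLE = 14;

    public static char serializedStateFor(int clientMinorVersion, char serverState) {
        // 'w' = PENDING_DISABLE on a 4.14 server; 4.13 clients cannot parse it,
        // so fall back to a pre-4.14 state ('x' here, assumed).
        if (serverState == 'w' && clientMinorVersion < MIN_MINOR_WITH_PENDING_DISABLE) {
            return 'x';
        }
        return serverState;
    }
}{code}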

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-18 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.03.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  
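> A runnable sketch of the race described above, reduced to plain JDK 
> executors; the names are illustrative, not Phoenix's classes. One region's 
> close() tears down a region-server-level shared resource while another 
> region is still submitting index writes to it.
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.RejectedExecutionException;
>
> // Illustrative only: the shared-shutdown race, reduced to a JDK thread pool.
> public class SharedResourceRaceSketch {
>     public static void main(String[] args) {
>         ExecutorService shared = Executors.newFixedThreadPool(2); // RS-level resource
>
>         // Region1 closing: shuts the shared pool down, as stop() does above.
>         shared.shutdown();
>
>         // Region2 writing index updates: the submit is rejected, which in the
>         // scenario above cascades into the index failure policy.
>         try {
>             shared.submit(() -> System.out.println("index update"));
>         } catch (RejectedExecutionException e) {
>             System.out.println("rejected after shutdown: " + e);
>         }
>     }
> }{code}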



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.02.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-4.x-HBase-1.3.01.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-4.x-HBase-1.3.01.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-4.x-HBase-1.3.01.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993-v1.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.01.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-v1.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: (was: PHOENIX-4993.patch)

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-v1.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-v1.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-v1.patch, PHOENIX-4993.patch
>
>
> The issue is related to the Region Server being killed when one region is 
> closing while another region is trying to write index updates.
> When the data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt other regions' 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null Connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Since it cannot reach 
> SYSCAT, the KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-10-24 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-4993:
---

 Summary: Data table region should not close RS level shared/cached 
connections like IndexWriter, RecoveryIndexWriter
 Key: PHOENIX-4993
 URL: https://issues.apache.org/jira/browse/PHOENIX-4993
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Kiran Kumar Maturi


The issue is related to the Region Server being killed when one region is 
closing while another region is trying to write index updates.

When the data table region closes, it closes the region-server-level 
cached/shared connections, which can interrupt other regions' 
index/index-state updates.

-- Region1: Closing
{code:java}
TrackingParallelWriterIndexCommitter#stop() {
this.retryingFactory.shutdown();
this.noRetriesFactory.shutdown();
}{code}
closes the cached connections by calling 
CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java

-- Region2: Writing index updates

Index updates fail because the connections are closed, which leads to a 
RejectedExecutionException or a null Connection. This triggers 
PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
SYSCAT table using the cached connections. Since it cannot reach SYSCAT, the 
KillServerOnFailurePolicy is triggered.

CoprocessorHConnectionTableFactory#getTable()
{code:java}
if (connection == null || connection.isClosed()) {
throw new IllegalArgumentException("Connection is null or closed.");
}{code}
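One possible direction, sketched here as a hypothesis and not as the approach 
in the attached patches: reference-count the shared factory so that a closing 
region releases only its own reference, and the underlying connections are 
torn down only when the last region releases. All names below are invented 
for illustration.
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical only: a reference-counted wrapper so one region's close()
// cannot tear down connections still in use by other regions.
public class RefCountedConnectionFactorySketch {
    private final AtomicInteger refCount = new AtomicInteger();
    private volatile boolean closed;

    public void acquire() {                  // called when a region opens
        refCount.incrementAndGet();
    }

    public void release() {                  // called when a region closes
        if (refCount.decrementAndGet() == 0) {
            closed = true;                   // really close shared connections here
        }
    }

    public void checkOpen() {                // guard for getTable()-style calls
        if (closed) {
            throw new IllegalArgumentException("Connection is null or closed.");
        }
    }
}{code}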
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)