Re: [DISCUSS] Drop support for HBase-1.2

2019-05-10 Thread Ankit Singhal
+1

On Fri, May 10, 2019 at 4:44 PM Josh Elser wrote:

> +1
>
> On 5/10/19 4:28 PM, Thomas D'Silva wrote:
> > Since HBase 1.2 is now end-of-life and we are creating a new branch to
> > support HBase-1.5 (PHOENIX-5277), I think we should drop the HBase-1.2
> > branches. What do folks think?
> >
> > Thanks,
> > Thomas
> >
>


[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5278:
-----------------------------
External issue URL:   (was: https://issues.apache.org/jira/browse/PHOENIX-3377)
 External issue ID: PHOENIX-3377

> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> --------------------------------------------------------------------------
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> There have been scenarios similar to this: a tenant-specific view is
> deleted, the same tenant-specific view is recreated with new columns, and
> subsequent queries fail with an NPE over syscat due to corrupt data. The
> view column count changed, but the Phoenix syscat table did not properly
> update this information, so querying the view always triggers a
> NullPointerException. Adding this unit test will help us further debug the
> exact corruption issue and give us confidence in this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: at index 50
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
> ... 10 more
>  
>  
> Related issue: https://issues.apache.org/jira/browse/PHOENIX-3377
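
The "at index 50" NPE in the trace above is Guava refusing a null element
while PTableImpl.init copies the view's columns into an ImmutableList: index
50 is the column slot left null by the corrupt syscat rows. A minimal sketch
of that failure mode, assuming only Guava on the classpath (the names here
are illustrative, not Phoenix code):

    import com.google.common.collect.ImmutableList;

    public class NullColumnDemo {
        public static void main(String[] args) {
            // Three column slots; the last stays null, like a column row
            // missing from syscat after the drop/recreate.
            String[] columns = {"PK1__C", "F1__C", null};
            // Throws java.lang.NullPointerException: at index 2, via
            // ObjectArrays.checkElementNotNull, matching the trace above.
            ImmutableList.copyOf(columns);
        }
    }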



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Drop support for HBase-1.2

2019-05-10 Thread Josh Elser

+1

On 5/10/19 4:28 PM, Thomas D'Silva wrote:

Since HBase 1.2 is now end-of-life and we are creating a new branch to
support HBase-1.5 (PHOENIX-5277), I think we should drop the HBase-1.2
branches. What do folks think?

Thanks,
Thomas



[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5278:
-----------------------------
External issue URL: https://issues.apache.org/jira/browse/PHOENIX-3377

> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> --------------------------------------------------------------------------
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> There have been scenarios similar to this: a tenant-specific view is
> deleted, the same tenant-specific view is recreated with new columns, and
> subsequent queries fail with an NPE over syscat due to corrupt data. The
> view column count changed, but the Phoenix syscat table did not properly
> update this information, so querying the view always triggers a
> NullPointerException. Adding this unit test will help us further debug the
> exact corruption issue and give us confidence in this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: at index 50
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
> at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:421)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
> ... 10 more
>  
>  
> Related issue: https://issues.apache.org/jira/browse/PHOENIX-3377



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Saksham Gangwar (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saksham Gangwar updated PHOENIX-5278:
-------------------------------------
Description: 
There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. The
view column count changed, but the Phoenix syscat table did not properly
update this information, so querying the view always triggers a
NullPointerException. Adding this unit test will help us further debug the
exact corruption issue and give us confidence in this use case.

Exception Stacktrace:

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)

at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)

at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException: at index 50

at 
com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)

at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)

... 10 more

 

 

Related issue: https://issues.apache.org/jira/browse/PHOENIX-3377

  was:
There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. The
view column count changed, but the Phoenix syscat table did not properly
update this information, so querying the view always triggers a
NullPointerException. Adding this unit test will help us further debug the
exact corruption issue and give us confidence in this use case.

Exception Stacktrace:

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)

at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)

at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException: at index 50

at 
com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)

at 

[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Saksham Gangwar (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saksham Gangwar updated PHOENIX-5278:
-------------------------------------
Description: 
There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. The
view column count changed, but the Phoenix syscat table did not properly
update this information, so querying the view always triggers a
NullPointerException. Adding this unit test will help us further debug the
exact corruption issue and give us confidence in this use case.

Exception Stacktrace:

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)

at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)

at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException: at index 50

at 
com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)

at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)

... 10 more

  was:
There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. Adding
this unit test will help us further debug the exact corruption issue and
give us confidence in this use case.

Exception Stacktrace:

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)

at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)

at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException: at index 50

at 
com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)

at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)

at 

[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Saksham Gangwar (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saksham Gangwar updated PHOENIX-5278:
-------------------------------------
Description: 
There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. Adding
this unit test will help us further debug the exact corruption issue and
give us confidence in this use case.

Exception Stacktrace:

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)

at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)

at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)

at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException: at index 50

at 
com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)

at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)

at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)

... 10 more

  was: There have been scenarios similar to this: a tenant-specific view is
deleted, the same tenant-specific view is recreated with new columns, and
subsequent queries fail with an NPE over syscat due to corrupt data. Adding
this unit test will help us further debug the exact corruption issue and
give us confidence in this use case.


> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> --------------------------------------------------------------------------
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> There have been scenarios similar to this: a tenant-specific view is
> deleted, the same tenant-specific view is recreated with new columns, and
> subsequent queries fail with an NPE over syscat due to corrupt data. Adding
> this unit test will help us further debug the exact corruption issue and
> give us confidence in this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at 

[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Saksham Gangwar (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saksham Gangwar updated PHOENIX-5278:
-------------------------------------
Description: There have been scenarios similar to this: a tenant-specific
view is deleted, the same tenant-specific view is recreated with new
columns, and subsequent queries fail with an NPE over syscat due to corrupt
data. Adding this unit test will help us further debug the exact corruption
issue and give us confidence in this use case.  (was: There have been
customer scenarios with this use case: a tenant-specific view is deleted,
the same tenant-specific view is recreated with new columns, and subsequent
queries fail with an NPE over syscat due to corrupt data. Adding this unit
test will help us further debug the exact corruption issue and give us
confidence in this use case.)

> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> --------------------------------------------------------------------------
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> There have been scenarios similar to this: a tenant-specific view is
> deleted, the same tenant-specific view is recreated with new columns, and
> subsequent queries fail with an NPE over syscat due to corrupt data. Adding
> this unit test will help us further debug the exact corruption issue and
> give us confidence in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Saksham Gangwar (JIRA)
Saksham Gangwar created PHOENIX-5278:


 Summary: Add unit test to make sure drop/recreate of tenant view 
with added columns doesn't corrupt syscat
 Key: PHOENIX-5278
 URL: https://issues.apache.org/jira/browse/PHOENIX-5278
 Project: Phoenix
  Issue Type: Bug
Reporter: Saksham Gangwar


There have been customer scenarios with this use case: a tenant-specific
view is deleted, the same tenant-specific view is recreated with new
columns, and subsequent queries fail with an NPE over syscat due to corrupt
data. Adding this unit test will help us further debug the exact corruption
issue and give us confidence in this use case.
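
To make the scenario concrete, here is a rough sketch of the flow such a
unit test would exercise, using standard Phoenix JDBC (the table and view
names are hypothetical, not taken from the report):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TenantViewDropRecreateSketch {
        public static void main(String[] args) throws Exception {
            // Global connection: create a multi-tenant base table.
            try (Connection global =
                    DriverManager.getConnection("jdbc:phoenix:localhost")) {
                global.createStatement().execute(
                    "CREATE TABLE BASE__B (TENANT_ID VARCHAR NOT NULL, "
                    + "PK1__C VARCHAR NOT NULL, F1__C VARCHAR "
                    + "CONSTRAINT PK PRIMARY KEY (TENANT_ID, PK1__C)) "
                    + "MULTI_TENANT=true");
            }
            // Tenant connection: create, drop, then recreate the view
            // with an added column, and query it.
            Properties props = new Properties();
            props.setProperty("TenantId", "tenant1");
            try (Connection tenant = DriverManager.getConnection(
                    "jdbc:phoenix:localhost", props)) {
                tenant.createStatement().execute(
                    "CREATE VIEW VIEW_NAME_ABC (V1__C VARCHAR) "
                    + "AS SELECT * FROM BASE__B");
                tenant.createStatement().execute("DROP VIEW VIEW_NAME_ABC");
                tenant.createStatement().execute(
                    "CREATE VIEW VIEW_NAME_ABC (V1__C VARCHAR, V2__C VARCHAR) "
                    + "AS SELECT * FROM BASE__B");
                // With corrupt syscat metadata, this query is where the
                // NullPointerException surfaced.
                tenant.createStatement()
                      .executeQuery("SELECT * FROM VIEW_NAME_ABC").next();
            }
        }
    }

The test would assert that the query succeeds and that the recreated view
reports the expected column count.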



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] Drop support for HBase-1.2

2019-05-10 Thread Thomas D'Silva
Since HBase 1.2 is now end-of-life and we are creating a new branch to
support HBase-1.5 (PHOENIX-5277), I think we should drop the HBase-1.2
branches. What do folks think?

Thanks,
Thomas


[jira] [Updated] (PHOENIX-4925) Use a Variant Segment tree to organize Guide Post Info

2019-05-10 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-4925:
-----------------------------
Attachment: PHOENIX-4925.phoenix-stats.0510.patch

> Use a Variant Segment tree to organize Guide Post Info
> --------------------------------------------------------------------------
>
> Key: PHOENIX-4925
> URL: https://issues.apache.org/jira/browse/PHOENIX-4925
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Attachments: PHOENIX-4925.phoenix-stats.0502.patch, 
> PHOENIX-4925.phoenix-stats.0510.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> As reported, query compilation (for the sample queries shown below),
> especially deriving estimates and generating parallel scans from guide
> posts, became much slower after we introduced Phoenix Stats.
>  a. SELECT f1__c FROM MyCustomBigObject__b ORDER BY Pk1__c
>  b. SELECT f1__c FROM MyCustomBigObject__b WHERE nonpk1__c = 'x' ORDER BY
> Pk1__c
>  c. SELECT f1__c FROM MyCustomBigObject__b WHERE pk2__c = 'x' ORDER BY
> pk1__c,pk2__c
>  d. SELECT f1__c FROM MyCustomBigObject__b WHERE pk1__c = 'x' AND nonpk1__c
> ORDER BY pk1__c,pk2__c
>  e. SELECT f1__c FROM MyCustomBigObject__b WHERE pk__c >= 'd' AND pk__c <=
> 'm' OR pk__c >= 'o' AND pk__c <= 'x' ORDER BY pk__c // pk__c is the only
> column in the primary key.
>   
> By using prefix encoding for guide post info, we have to decode and traverse
> guide posts sequentially, which causes the time complexity of
> BaseResultIterators.getParallelScan(...) to be O(n), where n is the total
> count of guide posts.
> According to PHOENIX-2417, to reduce the footprint in the client cache and
> over transmission, prefix encoding is used as the in-memory and
> over-the-wire encoding for guide post info.
> We can use a segment tree to address both memory and performance concerns.
> The guide posts are partitioned into k chunks (k=1024?); each chunk is
> encoded with prefix encoding, and the encoded data forms a leaf node of the
> tree. Each inner node contains summary info (the count of rows, the data
> size) for the subtree rooted at that node.
> With this tree-like data structure, compared to the current one, the size
> increase (mainly coming from the n/k-1 inner nodes) is negligible. The time
> complexity for queries a, b, c can be reduced to O(m), where m is the total
> count of regions; the time complexity for "EXPLAIN" on queries a, b, c can
> be reduced to O(m) too, and if we support "EXPLAIN (ESTIMATE ONLY)", it can
> even be reduced to O(1). For queries d and e, the time complexity to find
> the start of the target scan ranges can be reduced to O(log(n/k)).
> The tree can also integrate AVL and B+ tree characteristics to support
> partial load/unload when interacting with the stats client cache.
>  
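
A hedged sketch of the proposed structure (illustrative names, not the
attached patch): leaves hold prefix-encoded chunks of about k guide posts,
and every inner node carries the aggregated row count and data size of its
subtree, so estimates can be answered without decoding each chunk:

    import java.util.List;

    final class GuidePostTreeNode {
        final byte[] encodedChunk;           // non-null only at leaves
        final GuidePostTreeNode left, right; // non-null only at inner nodes
        final byte[] lastKey;                // last guide post key in subtree
        final long rowCount;                 // subtree summary
        final long byteSize;                 // subtree summary

        GuidePostTreeNode(byte[] chunk, byte[] lastKey,
                          long rows, long bytes) { // leaf
            this.encodedChunk = chunk; this.left = null; this.right = null;
            this.lastKey = lastKey; this.rowCount = rows; this.byteSize = bytes;
        }

        GuidePostTreeNode(GuidePostTreeNode l, GuidePostTreeNode r) { // inner
            this.encodedChunk = null; this.left = l; this.right = r;
            this.lastKey = r.lastKey;
            // Summaries roll up, so a whole-table estimate reads the
            // root's fields in O(1).
            this.rowCount = l.rowCount + r.rowCount;
            this.byteSize = l.byteSize + r.byteSize;
        }

        // Builds a balanced tree over the encoded leaf chunks.
        static GuidePostTreeNode build(List<GuidePostTreeNode> leaves,
                                       int lo, int hi) {
            if (lo == hi) return leaves.get(lo);
            int mid = (lo + hi) >>> 1;
            return new GuidePostTreeNode(build(leaves, lo, mid),
                                         build(leaves, mid + 1, hi));
        }
    }

Descending left whenever the probe key sorts at or below left.lastKey
locates the chunk containing a given start key in O(log(n/k)), and only
that one chunk has to be prefix-decoded.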



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5266) Client can only write on Index Table and skip data table if failure happens because of region split/move etc

2019-05-10 Thread Mihir Monani (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mihir Monani updated PHOENIX-5266:
----------------------------------
Fix Version/s: 5.1.0
   4.15.0

> Client can only write on Index Table and skip data table if failure happens 
> because of region split/move etc
> --------------------------------------------------------------------------
>
> Key: PHOENIX-5266
> URL: https://issues.apache.org/jira/browse/PHOENIX-5266
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0, 4.14.2
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5266-4.x-HBase-1.3.01.patch, 
> PHOENIX-5266-4.x-HBase-1.3.02.patch, PHOENIX-5266.01.patch, 
> PHOENIX-5266.patch, PHOENIX-5266.patch
>
>
> With the Phoenix 4.14.1 client, there is a scenario where the client skips
> the data table write but successfully writes to the index table. We should
> treat this as a data loss scenario.
>  
> Relevant code path :-
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L994-L1043]
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1089-L1109]
>  
> Here is what happens:
>  * Consider the following assumptions for the scenario:
>  ** max rows in a single batch = 100
>  ** max batch size = 2 MB
>  * When the client hits SQLExceptionCode 1121, it sets
> shouldRetryIndexedMutation=true.
>  * When the client sends a batch of only 100 rows per the configuration but
> the batch size is >2 MB, MutationState.java#991 splits this 100-row batch
> into multiple smaller batches, each <2 MB.
>  ** MutationState.java#991:
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L991]
>  * Suppose there are 5 batches of 20 rows and the client hits
> SQLExceptionCode 1121 on the 2nd batch; it then sets
> shouldRetryIndexedMutation=true and retries all 5 batches with only index
> updates. This results in rows missing from the data table (see the sketch
> below).
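
A simplified, self-contained sketch of that control flow (hypothetical
names; the real logic lives in MutationState): the retry flag is sticky
across the split sub-batches, so once one sub-batch trips it, the rest are
replayed as index-only writes and their data table rows are never written:

    import java.util.List;

    class StickyIndexRetrySketch {
        // Sticky across sub-batches: this is the heart of the bug.
        private boolean shouldRetryIndexedMutation = false;

        void send(List<List<Object>> splitBatches) {
            for (List<Object> batch : splitBatches) {
                if (shouldRetryIndexedMutation) {
                    // Data table write skipped entirely: data loss.
                    writeIndexUpdatesOnly(batch);
                    continue;
                }
                try {
                    writeDataAndIndex(batch);
                } catch (RuntimeException e) {
                    // Stand-in for SQLExceptionCode 1121.
                    shouldRetryIndexedMutation = true;
                    writeIndexUpdatesOnly(batch);
                }
            }
        }

        // Stubs standing in for the real data-table and index-table writes.
        void writeDataAndIndex(List<Object> batch) {}
        void writeIndexUpdatesOnly(List<Object> batch) {}
    }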



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)