[jira] [Updated] (PHOENIX-4906) Introduce a coprocessor to handle cases where we can block merge for regions of a salted table when it is problematic

2023-03-07 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4906:
-
Summary: Introduce a coprocessor to handle cases where we can block merge 
for regions of a salted table when it is problematic  (was: Abnormal query 
result due to merging regions of a salted table)

> Introduce a coprocessor to handle cases where we can block merge for regions 
> of a salted table when it is problematic
> ---
>
> Key: PHOENIX-4906
> URL: https://issues.apache.org/jira/browse/PHOENIX-4906
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.14.0
>Reporter: JeongMin Ju
>Assignee: Aman Poonia
>Priority: Critical
> Attachments: SaltingWithRegionMergeIT.java, 
> ScanRanges_intersectScan.png, TestSaltingWithRegionMerge.java, 
> initial_salting_region.png, merged-region.png
>
>
> For a salted table, when a query targets the entire data set, a different 
> plan can be created depending on the form of the query, and as a result 
> erroneous data is returned.
> {code:java}
> // Actually, the schema of the table I used is different, but please ignore it.
> create table if not exists test.test_table (
>   rk1 varchar not null,
>   rk2 varchar not null,
>   column1 varchar
>   constraint pk primary key (rk1, rk2)
> )
> ...
> SALT_BUCKETS=16...
> ;
> {code}
>  
> I created a table with 16 salted regions and then wrote a lot of data.
>  HBase automatically split the regions, and I merged regions to balance data 
> between the region servers.
> Then, when the queries are run, you can see that a different plan is created 
> depending on the WHERE clause.
>  * query1
>  select count\(*) from test.test_table;
> {code:java}
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                     | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1851-CHUNK 5005959292 ROWS 1944546675532 BYTES PARALLEL 11-WAY FULL SCAN OVER TEST:TEST_TABLE    | 1944546675532   | 5005959292     |
> | SERVER FILTER BY FIRST KEY ONLY                                                                          | 1944546675532   | 5005959292     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                         | 1944546675532   | 5005959292     |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
>  * query2
>  select count\(*) from test.test_table where rk2 = 'aa';
> {code}
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                                 | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1846-CHUNK 4992196444 ROWS 1939177965768 BYTES PARALLEL 11-WAY RANGE SCAN OVER TEST:TEST_TABLE [0] - [15]    | 1939177965768   | 4992196444     |
> | SERVER FILTER BY FIRST KEY ONLY AND RK2 = 'aa'                                                                       | 1939177965768   | 4992196444     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                                     | 1939177965768   | 4992196444     |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
> Since rk2, used in the where clause of query2, is the second column of the 
> PK, query2 should be a full scan just like query1.
> However, as you can see, query2 is planned as a range scan, and it also 
> generates five fewer chunks than query1 (1846 vs. 1851).
> I added logging and printed the start key and end key of each scan object 
> generated by the plan, and found that 5 chunks were missing for query2.
> All five missing chunks belonged to regions whose original region boundary 
> values were not preserved through the merge operation.
> !initial_salting_region.png!
> After merging regions
> !merged-region.png!
> The code that caused the problem is this part.
>  When a select query is executed, the 
> 
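The retitled issue points toward vetoing such merges in a coprocessor. As a rough illustration of what that hook could look like (not the actual Phoenix patch), here is a minimal sketch assuming the HBase 2.x MasterObserver API; the class name and the firstSaltByte helper are hypothetical:

{code:java}
// Hypothetical sketch only, assuming the HBase 2.x MasterObserver API: veto any
// merge whose regions start in different salt buckets. The class name and the
// firstSaltByte helper are illustrative, not the actual Phoenix implementation.
import java.io.IOException;
import java.util.Optional;

import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

public class SaltedTableMergeBlocker implements MasterCoprocessor, MasterObserver {

    @Override
    public Optional<MasterObserver> getMasterObserver() {
        return Optional.of(this);
    }

    @Override
    public void preMergeRegions(ObserverContext<MasterCoprocessorEnvironment> ctx,
                                RegionInfo[] regionsToMerge) throws IOException {
        // Assumes the table is salted and the first row-key byte is the salt byte.
        byte saltByte = firstSaltByte(regionsToMerge[0]);
        for (RegionInfo region : regionsToMerge) {
            if (firstSaltByte(region) != saltByte) {
                // Throwing here makes the master reject the merge request.
                throw new DoNotRetryIOException("Merge would combine regions from different "
                    + "salt buckets and break the bucket-boundary assumption: "
                    + region.getRegionNameAsString());
            }
        }
    }

    private static byte firstSaltByte(RegionInfo region) {
        byte[] startKey = region.getStartKey();
        // An empty start key means the first region of the table; treat it as bucket 0 here.
        return startKey.length == 0 ? 0 : startKey[0];
    }
}
{code}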

[jira] [Assigned] (PHOENIX-6871) Write threads blocked with deadlock

2023-02-09 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-6871:


Assignee: Aman Poonia

> Write threads blocked with deadlock
> ---
>
> Key: PHOENIX-6871
> URL: https://issues.apache.org/jira/browse/PHOENIX-6871
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.0, 5.2.0, 5.1.3
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
>
> Found one Java-level deadlock:
> ==============================
> "RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020":
>   waiting for ownable synchronizer 0x7f93d40944f0, (a java.util.concurrent.locks.ReentrantLock$FairSync),
>   which is held by "RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020"
> "RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020":
>   waiting for ownable synchronizer 0x7f93d40cd570, (a java.util.concurrent.locks.ReentrantLock$FairSync),
>   which is held by "RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020"
> Java stack information for the threads listed above:
> =====================================================
> "RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020":
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for <0x7f93d40944f0> (a java.util.concurrent.locks.ReentrantLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:936)
>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1250)
>   at java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:447)
>   at org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:89)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.lockRows(IndexRegionObserver.java:557)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:1167)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:460)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1020)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3543)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3273)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3215)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:967)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:895)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2524)
>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2432)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
> "RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020":
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for <0x7f93d40cd570> (a java.util.concurrent.locks.ReentrantLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:936)
>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1250)
>   at java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:447)
>   at org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:89)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.lockRows(IndexRegionObserver.java:557)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:1167)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:460)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
>   at 
> 
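Each handler above holds one row lock while waiting for the other's, a textbook circular wait. As a generic illustration (not the Phoenix fix), acquiring row locks in a single global order removes that possibility; the SortedRowLocker below is a hypothetical sketch built on plain ReentrantLocks:

{code:java}
// Generic illustration (not the Phoenix fix): acquiring row locks in one global,
// sorted order removes the circular wait shown in the trace above. All names here
// are hypothetical; the lock map is never pruned, which a real implementation
// would have to handle.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class SortedRowLocker {
    private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    /** Locks the given rows in lexicographic order so two batches can never deadlock. */
    public List<ReentrantLock> lockRows(List<String> rowKeys) {
        List<String> sorted = new ArrayList<>(rowKeys);
        sorted.sort(String::compareTo); // a single global ordering prevents circular waits
        List<ReentrantLock> acquired = new ArrayList<>();
        for (String row : sorted) {
            ReentrantLock lock = locks.computeIfAbsent(row, k -> new ReentrantLock(true));
            lock.lock();
            acquired.add(lock);
        }
        return acquired;
    }

    /** Releases the locks in reverse acquisition order. */
    public void unlockAll(List<ReentrantLock> acquired) {
        for (int i = acquired.size() - 1; i >= 0; i--) {
            acquired.get(i).unlock();
        }
    }
}
{code}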

[jira] [Created] (PHOENIX-6871) Write threads blocked with deadlock

2023-02-09 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-6871:


 Summary: Write threads blocked with deadlock
 Key: PHOENIX-6871
 URL: https://issues.apache.org/jira/browse/PHOENIX-6871
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3, 4.16.0, 5.2.0
Reporter: Aman Poonia


Found one Java-level deadlock:
==============================
"RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020":
  waiting for ownable synchronizer 0x7f93d40944f0, (a java.util.concurrent.locks.ReentrantLock$FairSync),
  which is held by "RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020"
"RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020":
  waiting for ownable synchronizer 0x7f93d40cd570, (a java.util.concurrent.locks.ReentrantLock$FairSync),
  which is held by "RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020"
Java stack information for the threads listed above:
=====================================================
"RpcServer.default.FPBQ.Fifo.handler=202,queue=20,port=61020":
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for <0x7f93d40944f0> (a java.util.concurrent.locks.ReentrantLock$FairSync)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:936)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1250)
  at java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:447)
  at org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:89)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.lockRows(IndexRegionObserver.java:557)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:1167)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:460)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1020)
  at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3543)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3273)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3215)
  at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:967)
  at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:895)
  at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2524)
  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2432)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
"RpcServer.default.FPBQ.Fifo.handler=38,queue=12,port=61020":
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for <0x7f93d40cd570> (a java.util.concurrent.locks.ReentrantLock$FairSync)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:936)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1250)
  at java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:447)
  at org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:89)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.lockRows(IndexRegionObserver.java:557)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:1167)
  at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:460)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
  at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
 at 

[jira] [Assigned] (PHOENIX-4906) Abnormal query result due to merging regions of a salted table

2023-01-03 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-4906:


Assignee: Aman Poonia

> Abnormal query result due to merging regions of a salted table
> --
>
> Key: PHOENIX-4906
> URL: https://issues.apache.org/jira/browse/PHOENIX-4906
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.14.0
>Reporter: JeongMin Ju
>Assignee: Aman Poonia
>Priority: Critical
> Attachments: SaltingWithRegionMergeIT.java, 
> ScanRanges_intersectScan.png, TestSaltingWithRegionMerge.java, 
> initial_salting_region.png, merged-region.png
>
>
> For a salted table, when a query targets the entire data set, a different 
> plan can be created depending on the form of the query, and as a result 
> erroneous data is returned.
> {code:java}
> // Actually, the schema of the table I used is different, but please ignore it.
> create table if not exists test.test_table (
>   rk1 varchar not null,
>   rk2 varchar not null,
>   column1 varchar
>   constraint pk primary key (rk1, rk2)
> )
> ...
> SALT_BUCKETS=16...
> ;
> {code}
>  
> I created a table with 16 salted regions and then wrote a lot of data.
>  HBase automatically split the regions, and I merged regions to balance data 
> between the region servers.
> Then, when the queries are run, you can see that a different plan is created 
> depending on the WHERE clause.
>  * query1
>  select count\(*) from test.test_table;
> {code:java}
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                     | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1851-CHUNK 5005959292 ROWS 1944546675532 BYTES PARALLEL 11-WAY FULL SCAN OVER TEST:TEST_TABLE    | 1944546675532   | 5005959292     |
> | SERVER FILTER BY FIRST KEY ONLY                                                                          | 1944546675532   | 5005959292     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                         | 1944546675532   | 5005959292     |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
>  * query2
>  select count\(*) from test.test_table where rk2 = 'aa';
> {code}
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                                 | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1846-CHUNK 4992196444 ROWS 1939177965768 BYTES PARALLEL 11-WAY RANGE SCAN OVER TEST:TEST_TABLE [0] - [15]    | 1939177965768   | 4992196444     |
> | SERVER FILTER BY FIRST KEY ONLY AND RK2 = 'aa'                                                                       | 1939177965768   | 4992196444     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                                     | 1939177965768   | 4992196444     |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
> Since rk2, used in the where clause of query2, is the second column of the 
> PK, query2 should be a full scan just like query1.
> However, as you can see, query2 is planned as a range scan, and it also 
> generates five fewer chunks than query1 (1846 vs. 1851).
> I added logging and printed the start key and end key of each scan object 
> generated by the plan, and found that 5 chunks were missing for query2.
> All five missing chunks belonged to regions whose original region boundary 
> values were not preserved through the merge operation.
> !initial_salting_region.png!
> After merging regions
> !merged-region.png!
> The code that caused the problem is this part.
>  When a select query is executed, the 
> [org.apache.phoenix.iterate.BaseResultIterators#getParallelScans|https://github.com/apache/phoenix/blob/v4.11.0-HBase-1.2/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java#L743-L744]
>  method creates a Scan object based on the GuidePost in the statistics table. 
> In the case of a GuidePost that contains a region boundary, 

[jira] [Assigned] (PHOENIX-6851) Use spotless to format code in phoenix

2023-01-02 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-6851:


Assignee: Aman Poonia

> Use spotless to format code in phoenix
> --
>
> Key: PHOENIX-6851
> URL: https://issues.apache.org/jira/browse/PHOENIX-6851
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
>
> Similar to what HBase does, we can use the Spotless plugin to format code.
> The idea is to include the Spotless check as part of mvn install, so that if 
> the formatting is wrong, mvn install fails, just as it does in HBase.
>  
> More info in
> https://issues.apache.org/jira/browse/HBASE-26617



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6851) Use spotless to format code in phoenix

2023-01-02 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-6851:


 Summary: Use spotless to format code in phoenix
 Key: PHOENIX-6851
 URL: https://issues.apache.org/jira/browse/PHOENIX-6851
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia


Similar to what HBase does, we can use the Spotless plugin to format code.

The idea is to include the Spotless check as part of mvn install, so that if the 
formatting is wrong, mvn install fails, just as it does in HBase.

 

More info in

https://issues.apache.org/jira/browse/HBASE-26617



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-4906) Abnormal query result due to merging regions of a salted table

2022-12-18 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4906:
-
Summary: Abnormal query result due to merging regions of a salted table  
(was: Abnormal query result due to Phoenix plan error)

> Abnormal query result due to merging regions of a salted table
> --
>
> Key: PHOENIX-4906
> URL: https://issues.apache.org/jira/browse/PHOENIX-4906
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.14.0
>Reporter: JeongMin Ju
>Priority: Critical
> Attachments: SaltingWithRegionMergeIT.java, 
> ScanRanges_intersectScan.png, TestSaltingWithRegionMerge.java, 
> initial_salting_region.png, merged-region.png
>
>
> For a salted table, when a query targets the entire data set, a different 
> plan can be created depending on the form of the query, and as a result 
> erroneous data is returned.
> {code:java}
> // Actually, the schema of the table I used is different, but please ignore it.
> create table if not exists test.test_table (
>   rk1 varchar not null,
>   rk2 varchar not null,
>   column1 varchar
>   constraint pk primary key (rk1, rk2)
> )
> ...
> SALT_BUCKETS=16...
> ;
> {code}
>  
> I created a table with 16 salted regions and then wrote a lot of data.
>  HBase automatically split the regions, and I merged regions to balance data 
> between the region servers.
> Then, when the queries are run, you can see that a different plan is created 
> depending on the WHERE clause.
>  * query1
>  select count\(*) from test.test_table;
> {code:java}
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                     | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1851-CHUNK 5005959292 ROWS 1944546675532 BYTES PARALLEL 11-WAY FULL SCAN OVER TEST:TEST_TABLE    | 1944546675532   | 5005959292     |
> | SERVER FILTER BY FIRST KEY ONLY                                                                          | 1944546675532   | 5005959292     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                         | 1944546675532   | 5005959292     |
> +--------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
>  * query2
>  select count\(*) from test.test_table where rk2 = 'aa';
> {code}
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | PLAN                                                                                                                 | EST_BYTES_READ  | EST_ROWS_READ  |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1846-CHUNK 4992196444 ROWS 1939177965768 BYTES PARALLEL 11-WAY RANGE SCAN OVER TEST:TEST_TABLE [0] - [15]    | 1939177965768   | 4992196444     |
> | SERVER FILTER BY FIRST KEY ONLY AND RK2 = 'aa'                                                                       | 1939177965768   | 4992196444     |
> | SERVER AGGREGATE INTO SINGLE ROW                                                                                     | 1939177965768   | 4992196444     |
> +--------------------------------------------------------------------------------------------------------------------+-----------------+----------------+
> {code}
> Since rk2, used in the where clause of query2, is the second column of the 
> PK, query2 should be a full scan just like query1.
> However, as you can see, query2 is planned as a range scan, and it also 
> generates five fewer chunks than query1 (1846 vs. 1851).
> I added logging and printed the start key and end key of each scan object 
> generated by the plan, and found that 5 chunks were missing for query2.
> All five missing chunks belonged to regions whose original region boundary 
> values were not preserved through the merge operation.
> !initial_salting_region.png!
> After merging regions
> !merged-region.png!
> The code that caused the problem is this part.
>  When a select query is executed, the 
> [org.apache.phoenix.iterate.BaseResultIterators#getParallelScans|https://github.com/apache/phoenix/blob/v4.11.0-HBase-1.2/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java#L743-L744]
>  method creates a Scan object based on the GuidePost in the statistics table. 

[jira] [Assigned] (PHOENIX-6052) GLOBAL_MUTATION_COMMIT_TIME metric doesn't include the time spent in syscat rpc's

2022-12-05 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-6052:


Assignee: Aman Poonia

> GLOBAL_MUTATION_COMMIT_TIME metric doesn't include the time spent in syscat 
> rpc's
> -
>
> Key: PHOENIX-6052
> URL: https://issues.apache.org/jira/browse/PHOENIX-6052
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.14.3
>Reporter: Rushabh Shah
>Assignee: Aman Poonia
>Priority: Major
>
> Currently we measure the GLOBAL_MUTATION_COMMIT_TIME metric as the time spent 
> just in the htable.batch RPC for the base and index tables. 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1029-L1136
> We don't measure the time spent in 
> MutationState#validateAndGetServerTimestamp, which makes an RPC to the 
> SYSTEM.CATALOG table and is also part of the commit phase.
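As an illustration of the proposed accounting (the names below are hypothetical stand-ins, not the actual MutationState code), the commit-time measurement would wrap both the SYSTEM.CATALOG validation RPC and the batch write:

{code:java}
// Illustrative sketch only, with stand-in Runnable parameters instead of the real
// MutationState calls: time the whole commit phase (metadata validation RPC plus
// the batch write) rather than the htable.batch() call alone.
import java.util.concurrent.TimeUnit;

public final class CommitTimer {
    public static long timeCommitMillis(Runnable validateWithSyscat, Runnable sendBatch) {
        long start = System.nanoTime();
        validateWithSyscat.run(); // e.g. the SYSTEM.CATALOG timestamp-validation RPC
        sendBatch.run();          // e.g. the htable.batch() RPC for base and index tables
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }
}
{code}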



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-5980) MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete mutations

2022-11-25 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-5980:


Assignee: Aman Poonia

> MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete 
> mutations
> -
>
> Key: PHOENIX-5980
> URL: https://issues.apache.org/jira/browse/PHOENIX-5980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Aman Poonia
>Priority: Major
>  Labels: metrics, phoenix-hardening, quality-improvement
> Fix For: 4.17.0, 4.16.2
>
>
> In the conn.commit() path, we get the number of mutations that failed to be 
> committed in the catch block of MutationState.sendMutations() (see 
> [here|https://github.com/apache/phoenix/blob/dcc88af8acc2ba8df10d2e9d498ab3646fdf0a78/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1195-L1198]).
>  
> In the case of delete mutations, uncommittedStatementIndexes.length always 
> resolves to 1, so we update the metric by only 1 even though the failed 
> mutation list corresponds to multiple DELETE mutations. In the case of 
> upserts, using uncommittedStatementIndexes.length is fine since each upsert 
> statement corresponds to 1 Put. We should fix the logic for delete and mixed 
> delete + upsert mutation batch failures.
> This wrong value is propagated to the global client metrics as well as the 
> MutationMetricQueue metrics.
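A sketch of the direction suggested above, with hypothetical names rather than the actual Phoenix code:

{code:java}
// Sketch only (not the actual Phoenix fix): derive the failed-mutation count from
// the batch that actually failed instead of from uncommittedStatementIndexes.length,
// which is 1 for a single DELETE statement even when it expanded into many Delete
// mutations.
import java.util.List;

import org.apache.hadoop.hbase.client.Mutation;

public final class FailedBatchMetric {
    public static long failedMutationCount(List<? extends Mutation> failedBatch) {
        return failedBatch == null ? 0L : failedBatch.size();
    }
}
{code}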



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6828) Test failure in master branch : LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny

2022-11-22 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia resolved PHOENIX-6828.
--
Resolution: Not A Problem

> Test failure in master branch : 
> LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny
> ---
>
> Key: PHOENIX-6828
> URL: https://issues.apache.org/jira/browse/PHOENIX-6828
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Rushabh Shah
>Assignee: Aman Poonia
>Priority: Major
>
> The following tests are failing in master branch
> [ERROR] Failures: 
> [ERROR]   LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny:229 
> expected:<2> but was:<1>
> [ERROR]   LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny:229 
> expected:<2> but was:<1>
> [ERROR]   LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny:229 
> expected:<2> but was:<1>
> [ERROR]   LogicalTableNameIT.testUpdatePhysicalIndexTableName_runScrutiny:229 
> expected:<2> but was:<1>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithIndex_runScrutiny:169 
> expected:<2> but was:<1>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithIndex_runScrutiny:169 
> expected:<2> but was:<1>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithIndex_runScrutiny:165 
> expected:<3> but was:<1>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithIndex_runScrutiny:165 
> expected:<3> but was:<1>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithViews_runScrutiny:353 
> expected:<2> but was:<0>
> [ERROR]   
> LogicalTableNameIT.testUpdatePhysicalTableNameWithViews_runScrutiny:353 
> expected:<2> but was:<0>
> Failed in 2 different PR builds and confirmed locally on master that it is 
> failing.
> 1. 
> https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1518/4/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt
> 2. 
> https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1522/1/testReport/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6813) Refactor iterators code to make them more readable

2022-10-18 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-6813:
-
Summary: Refactor iterators code to make them more readable  (was: Refactor 
iterators code to make them little more readable)

> Refactor iterators code to make them more readable
> --
>
> Key: PHOENIX-6813
> URL: https://issues.apache.org/jira/browse/PHOENIX-6813
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6813) Refactor iterators code to make them little more readable

2022-10-18 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-6813:


 Summary: Refactor iterators code to make them little more readable
 Key: PHOENIX-6813
 URL: https://issues.apache.org/jira/browse/PHOENIX-6813
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia
Assignee: Aman Poonia






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6672) Move phoenix website from svn to git

2022-03-22 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-6672:


 Summary: Move phoenix website from svn to git
 Key: PHOENIX-6672
 URL: https://issues.apache.org/jira/browse/PHOENIX-6672
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia


Currently our website is hosted from svn. It would be good to move it to git so 
that other developers can create PRs the way they do for any JIRA. This will 
help improve the workflow for contributing to the Phoenix documentation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6616) Alter table command can be used to set normalization_enabled=true on salted tables

2021-12-21 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-6616:


 Summary: Alter table command can be used to set 
normalization_enabled=true on salted tables
 Key: PHOENIX-6616
 URL: https://issues.apache.org/jira/browse/PHOENIX-6616
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2, 4.16.1
Reporter: Aman Poonia


Here is what I found:

CREATE TABLE IF NOT EXISTS table1(a BIGINT NOT NULL, b BIGINT NOT NULL  
CONSTRAINT PK PRIMARY KEY (a, b)) TTL=7776000, NORMALIZATION_ENABLED=true, 
SALT_BUCKETS=16, DISABLE_TABLE_SOR=true, NORMALIZER_TARGET_REGION_SIZE=5200;


Error: ERROR 1147 (42Y86): Should not enable normalizer on salted table. 
tableName=TABLE1 (state=42Y86,code=1147)
java.sql.SQLException: ERROR 1147 (42Y86): Should not enable normalizer on 
salted table. tableName=TABLE1
    at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:618)
    at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:228)
    at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1462)
    at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1983)
    at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:3094)
    at 
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1118)
    at 
org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:421)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:516)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:482)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:481)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:469)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2036)
    at sqlline.Commands.execute(Commands.java:814)
    at sqlline.Commands.sql(Commands.java:754)

 

 

0: jdbc:phoenix:localhost:50141> CREATE TABLE IF NOT EXISTS table1(a BIGINT NOT 
NULL, b BIGINT NOT NULL  CONSTRAINT PK PRIMARY KEY (a, b)) TTL=7776000, 
NORMALIZATION_ENABLED=false, SALT_BUCKETS=16, DISABLE_TABLE_SOR=true, 
NORMALIZER_TARGET_REGION_SIZE=5200;
No rows affected (2.295 seconds)


0: jdbc:phoenix:localhost:50141> ALTER TABLE table1 set 
NORMALIZATION_ENABLED=true;
No rows affected (1.374 seconds)


So basically we are still able to enable normalization on a salted table if we 
go through ALTER TABLE.
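A minimal sketch of the missing guard, assuming the ALTER TABLE property path knows whether the table is salted; the class and method names are hypothetical, not the actual MetaDataClient code:

{code:java}
// Minimal sketch of the missing guard, with hypothetical names: run the same
// validation in the ALTER TABLE property path that CREATE TABLE already applies.
import java.sql.SQLException;
import java.util.Map;

public final class NormalizationGuard {
    public static void validateAlterProps(boolean tableIsSalted, Map<String, Object> newProps)
            throws SQLException {
        Object normalization = newProps.get("NORMALIZATION_ENABLED");
        // A real implementation would also handle string-typed property values.
        if (tableIsSalted && Boolean.TRUE.equals(normalization)) {
            // Mirrors ERROR 1147 (42Y86) raised by CREATE TABLE on salted tables.
            throw new SQLException("Should not enable normalizer on salted table", "42Y86", 1147);
        }
    }
}
{code}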



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-5493) Remove unnecessary iteration in BaseResultIterator

2019-09-25 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5493:
-
Attachment: PHOENIX-5493.patch

> Remove unnecessary iteration in BaseResultIterator 
> --
>
> Key: PHOENIX-5493
> URL: https://issues.apache.org/jira/browse/PHOENIX-5493
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.3
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5493.patch
>
>
> In BaseResultIterator
> {code:java}
> while (offset < nColumnsInCommon && offset < rangesListSize) {
>     List<KeyRange> ranges = rangesList.get(offset);
>     // We use a skip scan if we have multiple ranges or if
>     // we have a non single key range before the last range.
>     useSkipScan |= ranges.size() > 1 || hasRange;
>     cnf.add(ranges);
>     int rangeSpan = 1 + dataScanRanges.getSlotSpans()[offset];
>     if (offset + rangeSpan > nColumnsInCommon) {
>         rangeSpan = nColumnsInCommon - offset;
>         // trim range to only be rangeSpan in length
>         ranges = Lists.newArrayListWithExpectedSize(cnf.get(cnf.size()-1).size());
>         for (KeyRange range : cnf.get(cnf.size()-1)) {
>             range = clipRange(dataScanRanges.getSchema(), offset, rangeSpan, range);
>             // trim range to be only rangeSpan in length
>             ranges.add(range);
>         }
>         cnf.set(cnf.size()-1, ranges);
>     }
>     for (KeyRange range : ranges) {
>         if (!range.isSingleKey()) {
>             hasRange = true;
>         }
>     }
>     slotSpan[offset] = rangeSpan - 1;
>     offset = offset + rangeSpan;
> }
> {code}
>  We can break out of the inner loop and save some CPU cycles.
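A small self-contained sketch of the suggested change, using the Phoenix KeyRange type; it is equivalent to adding a break to the inner loop above:

{code:java}
// Return as soon as a non-single-key range is found, which is equivalent to
// setting hasRange = true and breaking out of the inner loop instead of scanning
// every remaining range.
import java.util.List;

import org.apache.phoenix.query.KeyRange;

final class RangeChecks {
    static boolean hasNonSingleKeyRange(List<KeyRange> ranges) {
        for (KeyRange range : ranges) {
            if (!range.isSingleKey()) {
                return true;
            }
        }
        return false;
    }
}
{code}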



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5493) Remove unnecessary iteration in BaseResultIterator

2019-09-25 Thread Aman Poonia (Jira)
Aman Poonia created PHOENIX-5493:


 Summary: Remove unnecessary iteration in BaseResultIterator 
 Key: PHOENIX-5493
 URL: https://issues.apache.org/jira/browse/PHOENIX-5493
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.3
Reporter: Aman Poonia
Assignee: Aman Poonia


In BaseResultIterator
{code:java}
while (offset < nColumnsInCommon && offset < rangesListSize) {
    List<KeyRange> ranges = rangesList.get(offset);
    // We use a skip scan if we have multiple ranges or if
    // we have a non single key range before the last range.
    useSkipScan |= ranges.size() > 1 || hasRange;
    cnf.add(ranges);
    int rangeSpan = 1 + dataScanRanges.getSlotSpans()[offset];
    if (offset + rangeSpan > nColumnsInCommon) {
        rangeSpan = nColumnsInCommon - offset;
        // trim range to only be rangeSpan in length
        ranges = Lists.newArrayListWithExpectedSize(cnf.get(cnf.size()-1).size());
        for (KeyRange range : cnf.get(cnf.size()-1)) {
            range = clipRange(dataScanRanges.getSchema(), offset, rangeSpan, range);
            // trim range to be only rangeSpan in length
            ranges.add(range);
        }
        cnf.set(cnf.size()-1, ranges);
    }
    for (KeyRange range : ranges) {
        if (!range.isSingleKey()) {
            hasRange = true;
        }
    }
    slotSpan[offset] = rangeSpan - 1;
    offset = offset + rangeSpan;
}
{code}
 We can break out of the inner loop and save some CPU cycles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5299) puppycrawl checkstyle dtds moved to sourceforge

2019-05-24 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5299:
-
Attachment: 5299.patch

> puppycrawl checkstyle dtds moved to sourceforge
> ---
>
> Key: PHOENIX-5299
> URL: https://issues.apache.org/jira/browse/PHOENIX-5299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: 5299.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> puppycrawl checkstyle dtds moved to sourceforge
> new urls are
>  
> [https://checkstyle.org/dtds/configuration_1_1.dtd]
> [https://checkstyle.org/dtds/suppressions_1_1.dtd]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5299) puppycrawl checkstyle dtds moved to sourceforge

2019-05-24 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5299:
-
Description: 
puppycrawl checkstyle dtds moved to sourceforge

new urls are

 

[https://checkstyle.org/dtds/configuration_1_1.dtd]

[https://checkstyle.org/dtds/suppressions_1_1.dtd]

 

> puppycrawl checkstyle dtds moved to sourceforge
> ---
>
> Key: PHOENIX-5299
> URL: https://issues.apache.org/jira/browse/PHOENIX-5299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
>
> puppycrawl checkstyle dtds moved to sourceforge
> new urls are
>  
> [https://checkstyle.org/dtds/configuration_1_1.dtd]
> [https://checkstyle.org/dtds/suppressions_1_1.dtd]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5299) puppycrawl checkstyle dtds moved to sourceforge

2019-05-24 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5299:
-
Environment: (was: puppycrawl checkstyle dtds moved to sourceforge

new urls are

 

[https://checkstyle.org/dtds/configuration_1_1.dtd]

[https://checkstyle.org/dtds/suppressions_1_1.dtd]

 )

> puppycrawl checkstyle dtds moved to sourceforge
> ---
>
> Key: PHOENIX-5299
> URL: https://issues.apache.org/jira/browse/PHOENIX-5299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5299) puppycrawl checkstyle dtds moved to sourceforge

2019-05-24 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5299:


 Summary: puppycrawl checkstyle dtds moved to sourceforge
 Key: PHOENIX-5299
 URL: https://issues.apache.org/jira/browse/PHOENIX-5299
 Project: Phoenix
  Issue Type: Bug
 Environment: puppycrawl checkstyle dtds moved to sourceforge

new urls are

 

[https://checkstyle.org/dtds/configuration_1_1.dtd]

[https://checkstyle.org/dtds/suppressions_1_1.dtd]

 
Reporter: Aman Poonia
Assignee: Aman Poonia






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Attachment: PHOENIX-5187-4.x-HBase-1.3.patch

> Avoid using FileInputStream and FileOutputStream 
> -
>
> Key: PHOENIX-5187
> URL: https://issues.apache.org/jira/browse/PHOENIX-5187
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-5187-4.x-HBase-1.3.patch
>
>
> Avoid using FileInputStream and FileOutputStream because of
> [https://bugs.openjdk.java.net/browse/JDK-8080225]
> This has been resolved in JDK 10.
> A quick workaround is to use Files.newInputStream and Files.newOutputStream.
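A minimal sketch of the workaround; the file names are placeholders:

{code:java}
// The NIO factory methods return streams that do not rely on finalization,
// unlike FileInputStream and FileOutputStream on pre-JDK-10 runtimes.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class StreamExample {
    public static void copy(String src, String dst) throws IOException {
        try (InputStream in = Files.newInputStream(Paths.get(src));
             OutputStream out = Files.newOutputStream(Paths.get(dst))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
{code}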



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Description: 
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects do not get cleaned up, even after we close them, until a full 
GC happens.

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.

  was:
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects do not get cleaned up, even after we close them, unless a full 
GC happens.

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.


> Avoid using FileInputStream and FileOutputStream 
> -
>
> Key: PHOENIX-5187
> URL: https://issues.apache.org/jira/browse/PHOENIX-5187
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-5187-4.x-HBase-1.3.patch
>
>
> Avoid using FileInputStream and FileOutputStream because of
> [https://bugs.openjdk.java.net/browse/JDK-8080225]
> The file objects do not get cleaned up, even after we close them, until a 
> full GC happens.
> This has been resolved in JDK 10.
> A quick workaround is to use Files.newInputStream and Files.newOutputStream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Description: 
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects do not get cleaned up, even after we close them, unless a full 
GC happens.

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.

  was:
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.


> Avoid using FileInputStream and FileOutputStream 
> -
>
> Key: PHOENIX-5187
> URL: https://issues.apache.org/jira/browse/PHOENIX-5187
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-5187-4.x-HBase-1.3.patch
>
>
> Avoid using FileInputStream and FileOutputStream because of
> [https://bugs.openjdk.java.net/browse/JDK-8080225]
> The file objects do not get cleaned up, even after we close them, unless a 
> full GC happens.
> This has been resolved in JDK 10.
> A quick workaround is to use Files.newInputStream and Files.newOutputStream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5187:


 Summary: Avoid using FileInputStream and FileOutputStream 
 Key: PHOENIX-5187
 URL: https://issues.apache.org/jira/browse/PHOENIX-5187
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia
Assignee: Aman Poonia


Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5186) Remove redundant check for local in metadata client

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5186:
-
Attachment: PHOENIX-5186.4.x-HBase-1.3.patch

> Remove redundant check for local in metadata client
> ---
>
> Key: PHOENIX-5186
> URL: https://issues.apache.org/jira/browse/PHOENIX-5186
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5186.4.x-HBase-1.3.patch
>
>
> Remove redundant check for local index type in metadata client
> {code:java}
> if (index.getIndexType() != IndexType.LOCAL) {
>     if (index.getIndexType() != IndexType.LOCAL) {
>         if (table.getType() != PTableType.VIEW) {
>             rowCount += updateStatisticsInternal(index.getPhysicalName(), index,
>                     updateStatisticsStmt.getProps(), true);
>         } else {
>             rowCount += updateStatisticsInternal(table.getPhysicalName(), index,
>                     updateStatisticsStmt.getProps(), true);
>         }
>     }
> }
> {code}
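For reference, removing the duplicated condition leaves the behavior unchanged; this is the same snippet with the inner check dropped:

{code:java}
// Same snippet as above with the duplicated IndexType.LOCAL check removed.
if (index.getIndexType() != IndexType.LOCAL) {
    if (table.getType() != PTableType.VIEW) {
        rowCount += updateStatisticsInternal(index.getPhysicalName(), index,
                updateStatisticsStmt.getProps(), true);
    } else {
        rowCount += updateStatisticsInternal(table.getPhysicalName(), index,
                updateStatisticsStmt.getProps(), true);
    }
}
{code}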



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5186) Remove redundant check for local in metadata client

2019-03-11 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5186:


 Summary: Remove redundant check for local in metadata client
 Key: PHOENIX-5186
 URL: https://issues.apache.org/jira/browse/PHOENIX-5186
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.1
Reporter: Aman Poonia
Assignee: Aman Poonia


Remove redundant check for local index type in metadata client
{code:java}
if (index.getIndexType() != IndexType.LOCAL) {
    if (index.getIndexType() != IndexType.LOCAL) {
        if (table.getType() != PTableType.VIEW) {
            rowCount += updateStatisticsInternal(index.getPhysicalName(), index,
                    updateStatisticsStmt.getProps(), true);
        } else {
            rowCount += updateStatisticsInternal(table.getPhysicalName(), index,
                    updateStatisticsStmt.getProps(), true);
        }
    }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5121) Move unnecessary sorting and fetching out of loop

2019-02-04 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5121:
-
Attachment: PHOENIX-5121.patch

> Move unnecessary sorting and fetching out of loop
> -
>
> Key: PHOENIX-5121
> URL: https://issues.apache.org/jira/browse/PHOENIX-5121
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5121.patch
>
>
> Don't fetch and sort the PK columns of a table inside the loop in 
> PhoenixDatabaseMetaData#getPrimaryKeys.
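As a generic illustration of the change (types and names are hypothetical, not the actual PhoenixDatabaseMetaData code):

{code:java}
// Hoist the invariant work out of the loop: fetch and sort the PK columns once,
// then reuse the sorted list on every iteration instead of recomputing it per row.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

final class HoistExample {
    static List<String> buildRows(List<String> pkColumns, List<String> tables) {
        List<String> sortedPk = new ArrayList<>(pkColumns);
        sortedPk.sort(Comparator.naturalOrder()); // done once, outside the loop
        List<String> rows = new ArrayList<>();
        for (String table : tables) {
            rows.add(table + ":" + String.join(",", sortedPk));
        }
        return rows;
    }
}
{code}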



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5121) Move unnecessary sorting and fetching out of loop

2019-02-04 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5121:


 Summary: Move unnecessary sorting and fetching out of loop
 Key: PHOENIX-5121
 URL: https://issues.apache.org/jira/browse/PHOENIX-5121
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia
Assignee: Aman Poonia


Don't fetch and sort the PK columns of a table inside the loop in 
PhoenixDatabaseMetaData#getPrimaryKeys.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5042) Improve the exception message for local index to make it

2018-11-23 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5042:
-
Attachment: PHOENIX-5042.patch

> Improve the exception message for local index to make it 
> -
>
> Key: PHOENIX-5042
> URL: https://issues.apache.org/jira/browse/PHOENIX-5042
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5042.patch
>
>
> In one of the tests we found that a LOCAL INDEX table doesn't support 
> HTableDescriptor options in the create statement. For instance, if we do
> {code:java}
> CREATE local INDEX IDX_AAA_ORDERS ON AAA_ORDERS (o_custkey) INCLUDE 
> (o_orderkey, o_totalprice) DEFERRED_LOG_FLUSH=true;{code}
> It throws an error message saying 
> {code:java}
> Error: ERROR 1009 (42L02): Properties may not be defined for a view. 
> (state=42L02,code=1009){code}
> This message doesn't explain how it is related to my local index!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5042) Improve the exception message for local index to make it easy for user to understand

2018-11-23 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5042:
-
Summary: Improve the exception message for local index to make it easy for 
user to understand   (was: Improve the exception message for local index to 
make it )

> Improve the exception message for local index to make it easy for user to 
> understand 
> -
>
> Key: PHOENIX-5042
> URL: https://issues.apache.org/jira/browse/PHOENIX-5042
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5042.patch
>
>
> In one of the tests we found that a LOCAL INDEX table doesn't support 
> HTableDescriptor options in the create statement. For instance, if we do
> {code:java}
> CREATE local INDEX IDX_AAA_ORDERS ON AAA_ORDERS (o_custkey) INCLUDE 
> (o_orderkey, o_totalprice) DEFERRED_LOG_FLUSH=true;{code}
> It throws an error message saying 
> {code:java}
> Error: ERROR 1009 (42L02): Properties may not be defined for a view. 
> (state=42L02,code=1009){code}
> This message doesn't explain how it is related to my local index!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5042) Improve the exception message for local index to make it

2018-11-23 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia reassigned PHOENIX-5042:


Assignee: Aman Poonia

> Improve the exception message for local index to make it 
> -
>
> Key: PHOENIX-5042
> URL: https://issues.apache.org/jira/browse/PHOENIX-5042
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5042.patch
>
>
> In one of the tests we found that a LOCAL INDEX table doesn't support 
> HTableDescriptor options in the create statement. For instance, if we do
> {code:java}
> CREATE local INDEX IDX_AAA_ORDERS ON AAA_ORDERS (o_custkey) INCLUDE 
> (o_orderkey, o_totalprice) DEFERRED_LOG_FLUSH=true;{code}
> It throws an error message saying 
> {code:java}
> Error: ERROR 1009 (42L02): Properties may not be defined for a view. 
> (state=42L02,code=1009){code}
> This message doesn't explain how it is related to my local index!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5042) Improve the exception message for local index to make it

2018-11-23 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5042:


 Summary: Improve the exception message for local index to make it 
 Key: PHOENIX-5042
 URL: https://issues.apache.org/jira/browse/PHOENIX-5042
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia


In one of the tests we found that a LOCAL INDEX table doesn't support 
HTableDescriptor options in the create statement. For instance, if we do
{code:java}
CREATE local INDEX IDX_AAA_ORDERS ON AAA_ORDERS (o_custkey) INCLUDE 
(o_orderkey, o_totalprice) DEFERRED_LOG_FLUSH=true;{code}
It throws an error message saying 
{code:java}
Error: ERROR 1009 (42L02): Properties may not be defined for a view. 
(state=42L02,code=1009){code}
This message doesn't explain how it is related to my local index!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4928) Local index is always in building state

2018-11-01 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia resolved PHOENIX-4928.
--
Resolution: Invalid

> Local index is always in building state
> ---
>
> Key: PHOENIX-4928
> URL: https://issues.apache.org/jira/browse/PHOENIX-4928
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: Phoenix 4.14
> HBase 1.3
>  
>Reporter: Aman Poonia
>Priority: Major
>  Labels: LocalIndex
>
> In some of our testing we found that the local index is always in BUILDING 
> state even when no writes are happening on the table. This is misleading, as 
> someone might assume that operations are still running on the index.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4989) Include disruptor jar in shaded dependency

2018-10-31 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4989:
-
Description: 
Include disruptor jar in shaded dependency as hbase has a different version of 
the same.

As a result we are not able to run any MR job like IndexScrutiny or IndexTool 
using Phoenix on HBase 1.3+ clusters.

  was:Include disruptor jar in shaded dependency as hbase has a different 
version of the same


> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include disruptor jar in shaded dependency as hbase has a different version 
> of the same.
> As a result we are not able to run any MR job like IndexScrutiny or 
> IndexTool using Phoenix on HBase 1.3+ clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4989) Include disruptor jar in shaded dependency

2018-10-30 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4989:
-
Fix Version/s: 4.14.1

> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.14.1
>
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include disruptor jar in shaded dependency as hbase has a different version 
> of the same



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4989) Include disruptor jar in shaded dependency

2018-10-23 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4989:
-
Attachment: PHOENIX-4989-4.x-HBase-1.3.patch

> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include disruptor jar in shaded dependency as hbase has a different version 
> of the same



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4989) Include disruptor jar in shaded dependency

2018-10-23 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-4989:


 Summary: Include disruptor jar in shaded dependency
 Key: PHOENIX-4989
 URL: https://issues.apache.org/jira/browse/PHOENIX-4989
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Aman Poonia
Assignee: Aman Poonia


Include disruptor jar in shaded dependency as hbase has a different version of 
the same



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4928) Local index is always in building state

2018-09-26 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-4928:


 Summary: Local index is always in building state
 Key: PHOENIX-4928
 URL: https://issues.apache.org/jira/browse/PHOENIX-4928
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
 Environment: Phoenix 4.14

HBase 1.3

 
Reporter: Aman Poonia


In some of our testing we found that the local index is always in BUILDING state 
even when no writes are happening on the table. This is misleading, as someone 
might assume that operations are still running on the index.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-09-08 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Description: 
{noformat}
2018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] regionserver.HRegionServer 
- ABORTING region server phoenix1,60020,1533715370645: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NullPointerException
 java.lang.NullPointerException
 at java.util.ArrayList.addAll(ArrayList.java:577)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
 at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
 at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
 at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
 at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
 at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
 at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
 {noformat}

A possible fix for this would be to backport PHOENIX-4440 and PHOENIX-4318.

  was:
{noformat}
018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] regionserver.HRegionServer 
- ABORTING region server phoenix1,60020,1533715370645: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NullPointerException
 java.lang.NullPointerException
 at java.util.ArrayList.addAll(ArrayList.java:577)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
 at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
 at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
 at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
 at 

[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-09-05 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.4.addendum.01.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.addendum.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.addendum.patch, PHOENIX-4839-4.x-HBase-1.3.patch, 
> PHOENIX-4839-4.x-HBase-1.4.addendum.01.patch, 
> PHOENIX-4839-4.x-HBase-1.4.addendum.patch, PHOENIX-4839-HBase-1.3.patch, 
> PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-09-05 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.3.addendum.01.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.addendum.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.addendum.patch, PHOENIX-4839-4.x-HBase-1.3.patch, 
> PHOENIX-4839-4.x-HBase-1.4.addendum.patch, PHOENIX-4839-HBase-1.3.patch, 
> PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-09-01 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.3.addendum.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.addendum.patch, PHOENIX-4839-4.x-HBase-1.3.patch, 
> PHOENIX-4839-4.x-HBase-1.4.addendum.patch, PHOENIX-4839-HBase-1.3.patch, 
> PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-30 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.4.addendum.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.patch, PHOENIX-4839-4.x-HBase-1.4.addendum.patch, 
> PHOENIX-4839-HBase-1.3.patch, PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-29 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.3.01.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.01.patch, 
> PHOENIX-4839-4.x-HBase-1.3.patch, PHOENIX-4839-HBase-1.3.patch, 
> PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-21 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-4.x-HBase-1.3.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-4839-4.x-HBase-1.3.patch, 
> PHOENIX-4839-HBase-1.3.patch, PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-20 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Attachment: PHOENIX-4839-HBase-1.3.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-4839-HBase-1.3.patch, PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4840) Failed to update statistics table

2018-08-08 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-4840:


 Summary:  Failed to update statistics table
 Key: PHOENIX-4840
 URL: https://issues.apache.org/jira/browse/PHOENIX-4840
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Aman Poonia


2018-08-08 08:29:08,020 ERROR [-update-statistics-0] stats.StatisticsScanner - 
Failed to update statistics table!
org.apache.hadoop.hbase.DoNotRetryIOException: hconnection-0x402d577e closed
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1186)
at 
org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
at 
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
at 
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:801)
at 
org.apache.phoenix.schema.stats.StatisticsWriter.deleteStatsForRegion(StatisticsWriter.java:267)
at 
org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:156)
at 
org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:141)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-08 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4839:
-
Affects Version/s: (was: 4.14.1)
   4.14.0

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Priority: Major
>
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
> java.lang.NullPointerException
> at java.util.ArrayList.addAll(ArrayList.java:577)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-08 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-4839:


 Summary: IndexHalfStoreFileReaderGenerator throws 
NullPointerException
 Key: PHOENIX-4839
 URL: https://issues.apache.org/jira/browse/PHOENIX-4839
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Aman Poonia


018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] regionserver.HRegionServer 
- ABORTING region server phoenix1,60020,1533715370645: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NullPointerException
java.lang.NullPointerException
at java.util.ArrayList.addAll(ArrayList.java:577)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
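
The abort boils down to a null collection reaching ArrayList.addAll(...) inside IndexHalfStoreFileReaderGenerator.getLocalIndexScanners. The actual fix is the patches attached on this issue (and the backports mentioned in the updated description), not the snippet below; this is only a minimal, self-contained illustration of the failure mode and of the defensive shape of the guard, using hypothetical names that are not Phoenix internals:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class AddAllNpeDemo {
    // Stand-in for a lookup that can legitimately return null, e.g. when a
    // region has no matching store files (hypothetical, for illustration only).
    static Collection<String> maybeScanners(boolean none) {
        return none ? null : Arrays.asList("scanner-1", "scanner-2");
    }

    public static void main(String[] args) {
        List<String> scanners = new ArrayList<>();

        // Shape of the crash above: ArrayList.addAll throws NullPointerException
        // when handed a null collection.
        // scanners.addAll(maybeScanners(true)); // would throw NPE

        // Defensive variant: only add when the source collection is present.
        Collection<String> extra = maybeScanners(true);
        if (extra != null) {
            scanners.addAll(extra);
        }
        System.out.println("scanners: " + scanners);
    }
}
{code}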



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4837) Update deprecated API to the new one. Also make the code a bit java 7 style

2018-08-07 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4837:
-
Attachment: (was: PHOENIX-4837.patch)

> Update deprecated API to the new one. Also make the code a bit java 7 style
> ---
>
> Key: PHOENIX-4837
> URL: https://issues.apache.org/jira/browse/PHOENIX-4837
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-4837.4.x-HBase-1.3.001.patch
>
>
> Currently we are using a few deprecated HBase APIs in the LocalIndexing classes. 
> Also, some of the AutoCloseable resources are being closed manually. Improve 
> the code to make it a bit cleaner and more up to date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4837) Update deprecated API to the new one. Also make the code a bit java 7 style

2018-08-07 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4837:
-
Attachment: PHOENIX-4837.4.x-HBase-1.3.001.patch

> Update deprecated API to the new one. Also make the code a bit java 7 style
> ---
>
> Key: PHOENIX-4837
> URL: https://issues.apache.org/jira/browse/PHOENIX-4837
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-4837.4.x-HBase-1.3.001.patch, PHOENIX-4837.patch
>
>
> Currently we are using a few deprecated HBase APIs in the LocalIndexing classes. 
> Also, some of the AutoCloseable resources are being closed manually. Improve 
> the code to make it a bit cleaner and more up to date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4837) Update deprecated API to the new one. Also make the code a bit java 7 style

2018-08-07 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4837:
-
Attachment: PHOENIX-4837.patch

> Update deprecated API to the new one. Also make the code a bit java 7 style
> ---
>
> Key: PHOENIX-4837
> URL: https://issues.apache.org/jira/browse/PHOENIX-4837
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-4837.patch
>
>
> Currently we are using a few deprecated HBase APIs in the LocalIndexing classes. 
> Also, some of the AutoCloseable resources are being closed manually. Improve 
> the code to make it a bit cleaner and more up to date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4837) Update deprecated API to the new one. Also make the code a bit java 7 style

2018-08-07 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-4837:


 Summary: Update deprecated API to the new one. Also make the code 
a bit java 7 style
 Key: PHOENIX-4837
 URL: https://issues.apache.org/jira/browse/PHOENIX-4837
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0
Reporter: Aman Poonia


Currently we are using a few deprecated HBase APIs in the LocalIndexing classes. 
Also, some of the AutoCloseable resources are being closed manually. Improve the 
code to make it a bit cleaner and more up to date.
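
As a sketch of the kind of cleanup proposed here (not the actual patch; the table name and row key are placeholders), the non-deprecated HBase 1.x client API can be combined with try-with-resources so that Connection and Table are closed automatically instead of via manual close() calls:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TryWithResourcesExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Connection and Table are Closeable, so try-with-resources closes them
        // in reverse order even if the read throws, replacing manual close()
        // calls in finally blocks and the deprecated HTable constructors.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("TEST.TEST_TABLE"))) {
            Result result = table.get(new Get(Bytes.toBytes("some-row-key")));
            System.out.println("row is empty: " + result.isEmpty());
        }
    }
}
{code}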



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)