[jira] [Updated] (PHOENIX-5070) NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup

2018-12-18 Thread Monani Mihir (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Monani Mihir updated PHOENIX-5070:
--
Attachment: PHOENIX-5070-4.x-HBase-1.3.03.patch

> NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in 
> secure setup
> -
>
> Key: PHOENIX-5070
> URL: https://issues.apache.org/jira/browse/PHOENIX-5070
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Monani Mihir
>Priority: Blocker
> Attachments: PHOENIX-5070-4.x-HBase-1.3.01.patch, 
> PHOENIX-5070-4.x-HBase-1.3.02.patch, PHOENIX-5070-4.x-HBase-1.3.03.patch, 
> PHOENIX-5070.patch
>
>
> PhoenixAccessController populates accessControllers during calls such as 
> loadTable, before it checks whether the current user has all required 
> permissions for the given HBase table and schema.
> With [PHOENIX-4661|https://issues.apache.org/jira/browse/PHOENIX-4661], this 
> initialization was removed for the preGetTable call only. Because of this, 
> when Phoenix is upgraded from 4.13.0 to 4.14.1, we get an NPE on 
> accessControllers in PhoenixAccessController#getUserPermissions.
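> Below is a minimal, self-contained sketch (hypothetical, and not the attached 
> patch) of the lazy-initialization guard that avoids dereferencing a null 
> accessControllers list; the class, field and helper names are illustrative 
> stand-ins rather than real Phoenix code:
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> 
> // Hypothetical, simplified illustration of the guard that avoids the NPE:
> // the controller list is (re)built on first use instead of being assumed to
> // have been populated by an earlier hook such as loadTable.
> public class LazyControllerListExample {
> 
>     private List<String> accessControllers; // stand-in for the coprocessor list
> 
>     private List<String> getAccessControllers() {
>         if (accessControllers == null) {
>             // in PhoenixAccessController this would collect the region
>             // server's loaded AccessController coprocessors; faked here
>             accessControllers = new ArrayList<>();
>             accessControllers.add("AccessController");
>         }
>         return accessControllers;
>     }
> 
>     public void getUserPermissions() {
>         // dereferencing the lazily built list can no longer throw an NPE,
>         // even when preGetTable is the first hook to run after an upgrade
>         for (String controller : getAccessControllers()) {
>             System.out.println("checking permissions via " + controller);
>         }
>     }
> 
>     public static void main(String[] args) {
>         new LazyControllerListExample().getUserPermissions();
>     }
> }
> {code}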
> Here is the exception stack trace:
>  
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>  org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NullPointerException
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:598)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16357)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2208)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2190)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:409)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:403)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:453)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:434)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:403)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:482)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:104)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:161)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:81)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:563)
> ... 9 more
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1291)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:35542)
> at 
> 

[jira] [Assigned] (PHOENIX-5072) Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2018-12-18 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5072:
---

Assignee: Swaroopa Kadam

> Cursor Query Loops Eternally with Local Index, Returns Fine Without It
> --
>
> Key: PHOENIX-5072
> URL: https://issues.apache.org/jira/browse/PHOENIX-5072
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jack Steenkamp
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PhoenixEternalCursorTest.java
>
>
>  
> I have come across a case where a particular cursor query keeps looping 
> forever when a local index is present. If, however, I execute the same query 
> without a local index on the table, it works as expected.
> You can reproduce this by running the attached standalone test case. You only 
> need to modify the JDBC_URL constant (by default it tries to connect to 
> localhost) and then compare the outputs between having CREATE_INDEX = true 
> and CREATE_INDEX = false.
> Here is an example of the output: 
> *1) Connect to an environment and create a simple table:*
> {code:java}
> Connecting To : jdbc:phoenix:localhost:63214{code}
> {code:java}
> CREATE TABLE IF NOT EXISTS SOME_NUMBERS
> (
>    ID                             VARCHAR    NOT NULL,
>    NAME                           VARCHAR    ,
>    ANOTHER_VALUE                  VARCHAR    ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true{code}
> *2) Optionally create a local index:*
>  
> If you want to reproduce the failure, create an index:
> {code:java}
> CREATE LOCAL INDEX index_01 ON SOME_NUMBERS(NAME, TRANSACTION_TIME DESC) 
> INCLUDE(ANOTHER_VALUE){code}
> Otherwise, skip this.
> *3) Insert a number of objects and verify their count*
> {code:java}
> System.out.println("\nInserting Some Items");
> DecimalFormat dmf = new DecimalFormat("");
> final String prefix = "ReferenceData.Country/";
> for (int i = 0; i < 5; i++)
> {
>   for (int j = 0; j < 2; j++)
>   {
> PreparedStatement prstmt = conn.prepareStatement("UPSERT INTO SOME_NUMBERS VALUES(?,?,?,?)");
> prstmt.setString(1,UUID.randomUUID().toString());
> prstmt.setString(2,prefix + dmf.format(i));
> prstmt.setString(3,UUID.randomUUID().toString());
> prstmt.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
> prstmt.execute();
> conn.commit();
> prstmt.close();
>   }
> }{code}
> Verify the count afterwards with: 
> {code:java}
> SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS {code}
> *5) Run a Cursor Query*
> Run a cursor using the standard sequence of commands as appropriate:
> {code:java}
> DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM SOME_NUMBERS where 
> NAME like 'ReferenceData.Country/%' ORDER BY TRANSACTION_TIME DESC{code}
> {code:java}
> OPEN MyCursor{code}
> {code:java}
> FETCH NEXT 10 ROWS FROM MyCursor{code}
>  * Without an index it will return the correct number of rows
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> CLOSING THE CURSOR
> Result : 0
> ITEMS returned by count : 10 | Items Returned by Cursor : 10
> ALL GOOD - No Exception{code}
>  * With an index it returns far more rows than expected (it appears to loop 
> forever, hence the test case aborts it; see the fetch-loop sketch after the 
> output below).
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> ITEMS returned by count : 10 | Items Returned by Cursor : 40
> Aborting the Cursor, as it is more than the count!
> Exception in thread "main" java.lang.RuntimeException: The cursor returned a 
> different number of rows from the count !! {code}
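> For reference, the fetch loop below shows roughly how a test can drive the 
> cursor and abort once it has produced more rows than the count. This is only 
> a sketch: the attached PhoenixEternalCursorTest.java may differ, and the 
> class name, JDBC URL and expected count of 10 are assumptions.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
> 
> // Sketch of a cursor-driving loop: fetch batches until an empty batch comes
> // back, and abort if the cursor has already produced more rows than
> // SELECT COUNT(1) reported.
> public class CursorLoopSketch {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement()) {
>             stmt.execute("DECLARE MyCursor CURSOR FOR SELECT NAME, ANOTHER_VALUE "
>                 + "FROM SOME_NUMBERS WHERE NAME LIKE 'ReferenceData.Country/%' "
>                 + "ORDER BY TRANSACTION_TIME DESC");
>             stmt.execute("OPEN MyCursor");
> 
>             int expected = 10; // from SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS
>             int fetched = 0;
>             while (true) {
>                 ResultSet rs = stmt.executeQuery("FETCH NEXT 10 ROWS FROM MyCursor");
>                 int batch = 0;
>                 while (rs.next()) {
>                     batch++;
>                 }
>                 rs.close();
>                 if (batch == 0) {
>                     break; // cursor exhausted: the expected behaviour
>                 }
>                 fetched += batch;
>                 if (fetched > expected) { // with the local index this fires
>                     throw new RuntimeException(
>                         "The cursor returned a different number of rows from the count !!");
>                 }
>             }
>             stmt.execute("CLOSE MyCursor");
>             System.out.println("Items Returned by Cursor : " + fetched);
>         }
>     }
> }
> {code}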
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-12-18 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-4993:

Attachment: PHOENIX-4993-4.x-HBase-1.3.03.patch

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch
>
>
> The issue is that a region server gets killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's index and 
> index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java.
>  
> -- Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to reach 
> the SYSTEM.CATALOG table through the cached connections. Since SYSTEM.CATALOG 
> cannot be reached, KillServerOnFailurePolicy is triggered.
> In CoprocessorHConnectionTableFactory#getTable():
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
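> For illustration only (this is not the attached patch), a region-server-level 
> shared resource is usually reference counted so that a single closing region 
> cannot tear it down for every other region. The class below is a simplified, 
> hypothetical stand-in rather than Phoenix code:
> {code:java}
> import java.util.concurrent.atomic.AtomicInteger;
> 
> // A shared, RS-level resource should only be torn down when the last user
> // releases it, not when the first region happens to close.
> public class RefCountedSharedConnection {
> 
>     private final AtomicInteger users = new AtomicInteger();
>     private volatile boolean closed;
> 
>     public void acquire() {
>         users.incrementAndGet();
>     }
> 
>     /** Called from a region's stop()/close() path. */
>     public void release() {
>         if (users.decrementAndGet() == 0) {
>             closeUnderlyingConnection(); // only the last region really closes it
>         }
>     }
> 
>     public void write(String indexUpdate) {
>         if (closed) {
>             // the situation PHOENIX-4993 describes: another region already
>             // shut the shared connection down, so this write cannot succeed
>             throw new IllegalStateException("Connection is null or closed.");
>         }
>         System.out.println("writing index update: " + indexUpdate);
>     }
> 
>     private void closeUnderlyingConnection() {
>         closed = true;
>         System.out.println("shared connection closed");
>     }
> 
>     public static void main(String[] args) {
>         RefCountedSharedConnection shared = new RefCountedSharedConnection();
>         shared.acquire();            // region 1 opens
>         shared.acquire();            // region 2 opens
>         shared.release();            // region 1 closes: connection stays open
>         shared.write("row1/index1"); // region 2 can still write
>         shared.release();            // region 2 closes: connection is closed
>     }
> }
> {code}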
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5072) Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2018-12-18 Thread Jack Steenkamp (JIRA)
Jack Steenkamp created PHOENIX-5072:
---

 Summary: Cursor Query Loops Eternally with Local Index, Returns 
Fine Without It
 Key: PHOENIX-5072
 URL: https://issues.apache.org/jira/browse/PHOENIX-5072
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Jack Steenkamp
 Attachments: PhoenixEternalCursorTest.java

 

I have come across a case where a particular cursor query keeps looping 
forever when a local index is present. If, however, I execute the same query 
without a local index on the table, it works as expected.

You can reproduce this by running the attached standalone test case. You only 
need to modify the JDBC_URL constant (by default it tries to connect to 
localhost) and then compare the outputs between having CREATE_INDEX = true 
and CREATE_INDEX = false.

Here is an example of the output: 

*1) Connect to an environment and create a simple table:*
{code:java}
Connecting To : jdbc:phoenix:localhost:63214{code}
{code:java}
CREATE TABLE IF NOT EXISTS SOME_NUMBERS
(
   ID                             VARCHAR    NOT NULL,
   NAME                           VARCHAR    ,
   ANOTHER_VALUE                  VARCHAR    ,
   TRANSACTION_TIME               TIMESTAMP  ,
   CONSTRAINT pk PRIMARY KEY(ID)
) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
UPDATE_CACHE_FREQUENCY=90,
COLUMN_ENCODED_BYTES=NONE,
IMMUTABLE_ROWS=true{code}
*2) Optionally create a local index:*

 

If you want to reproduce the failure, create an index:
{code:java}
CREATE LOCAL INDEX index_01 ON SOME_NUMBERS(NAME, TRANSACTION_TIME DESC) 
INCLUDE(ANOTHER_VALUE){code}
Otherwise, skip this.

*3) Insert a number of objects and verify their count*
{code:java}
System.out.println("\nInserting Some Items");
DecimalFormat dmf = new DecimalFormat("");
final String prefix = "ReferenceData.Country/";
for (int i = 0; i < 5; i++)
{
  for (int j = 0; j < 2; j++)
  {
PreparedStatement prstmt = conn.prepareStatement("UPSERT INTO SOME_NUMBERS VALUES(?,?,?,?)");
prstmt.setString(1,UUID.randomUUID().toString());
prstmt.setString(2,prefix + dmf.format(i));
prstmt.setString(3,UUID.randomUUID().toString());
prstmt.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
prstmt.execute();
conn.commit();
prstmt.close();
  }
}{code}
Verify the count afterwards with: 
{code:java}
SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS {code}
*5) Run a Cursor Query*

Run a cursor using the standard sequence of commands as appropriate:
{code:java}
DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM SOME_NUMBERS where 
NAME like 'ReferenceData.Country/%' ORDER BY TRANSACTION_TIME DESC{code}
{code:java}
OPEN MyCursor{code}
{code:java}
FETCH NEXT 10 ROWS FROM MyCursor{code}
 * Without an index it will return the correct number of rows

{code:java}
Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
TRANSACTION_TIME DESC
CLOSING THE CURSOR
Result : 0
ITEMS returned by count : 10 | Items Returned by Cursor : 10
ALL GOOD - No Exception{code}
 * With an index it returns far more rows than expected (it appears to loop 
forever, hence the test case aborts it).

{code:java}
Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
TRANSACTION_TIME DESC
ITEMS returned by count : 10 | Items Returned by Cursor : 40
Aborting the Cursor, as it is more than the count!
Exception in thread "main" java.lang.RuntimeException: The cursor returned a 
different number of rows from the count !! {code}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)