[jira] [Updated] (PHOENIX-5774) Phoenix Mapreduce job over hbase snapshots is extremely inefficient.

2020-03-18 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5774:
-
External issue ID: PHOENIX-4997

> Phoenix Mapreduce job over hbase snapshots is extremely inefficient.
> 
>
> Key: PHOENIX-5774
> URL: https://issues.apache.org/jira/browse/PHOENIX-5774
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Rushabh Shah
>Assignee: Xu Cang
>Priority: Major
>
> Internally we have a tenant estimation framework that calculates the number of 
> rows each tenant occupies in the cluster. The framework launches a 
> MapReduce (MR) job per table and runs the following query: "Select 
> tenant_id from ", and we count over this tenant_id in the reducer 
> phase.
>  Earlier we used to run this query against the live table, but we found the meta 
> table was getting hammered while this job was running, so we decided to run 
> the MR job on hbase snapshots instead of the live table, taking advantage of this 
> feature: https://issues.apache.org/jira/browse/PHOENIX-3744
> When we were querying the live table, the MR job for one of the biggest tables in 
> the sandbox cluster took around 2.5 hours.
>  After we started using hbase snapshots, the MR job for the same table took 
> 135 hours. We cap the number of concurrently running mappers at 15 to avoid 
> hammering the meta table when querying live tables, and we didn't remove that 
> restriction after we moved to hbase snapshots. So ideally, without that 
> restriction, it shouldn't take 135 hours to complete.
> Some statistics about that table:
>  Size: -578 GB- 2.70 TB, Num Regions in that table: -161- 670
> The average map time was 3 minutes 11 seconds when querying the live table.
>  The average map time was 5 hours 33 minutes when querying hbase snapshots.
> The issue is that we don't consider snapshot regions while generating splits. So during 
> the map phase, each map task has to go through all regions in the snapshot to 
> determine which regions hold the start and end key assigned to that task. After 
> determining those regions, it has to open each region and scan all hfiles in 
> it. In one such map task, the split's start and end key were 
> distributed among 289 regions (from the snapshot, not the live table). Reading from 
> each region took an average of 90 seconds, so for 289 regions it took 
> approximately 7 hours.
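The per-task cost quoted above can be sanity-checked with back-of-envelope arithmetic (a small sketch; the 289-region and 90-second figures come directly from the report):

```python
# Rough cost model for one map task reading over snapshot regions,
# using the figures reported above.
regions_per_task = 289    # snapshot regions a single map task had to open
seconds_per_region = 90   # average read time per region
hours = regions_per_task * seconds_per_region / 3600
print(f"approx hours per map task: {hours:.2f}")  # ~7.2, matching the ~7 hours observed
```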



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5774) Phoenix Mapreduce job over hbase snapshots is extremely inefficient.

2020-03-18 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-5774:


Assignee: Xu Cang



[jira] [Updated] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5570:
-
Affects Version/s: (was: 4.13.1)
   4.15.1

> Delete fails to delete data with null value in last column of PK. (all 
> columns are in PK)
> -
>
> Key: PHOENIX-5570
> URL: https://issues.apache.org/jira/browse/PHOENIX-5570
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Xu Cang
>Priority: Major
>
> Phoenix DELETE fails to delete a row in the following scenario:
> all columns are in the PK, and the last PK column has a null value in the row.
>
> {code:java}
> 1. Create table:
> CREATE TABLE IF NOT EXISTS TEST.KINGDOMTABLEWITHNULLPK3 (
>     TENANT_ID CHAR(15) NOT NULL,
>     GLOBAL_PARTY_ID VARCHAR,
>     GLOBAL_INPUT_ID VARCHAR,
>     CONSTRAINT PK PRIMARY KEY (
>         TENANT_ID,
>         GLOBAL_PARTY_ID,
>         GLOBAL_INPUT_ID DESC
>     )
> ) MULTI_TENANT=true;
>
> 2. Upsert data:
> UPSERT INTO TEST.KINGDOMTABLEWITHNULLPK3 (TENANT_ID, GLOBAL_PARTY_ID)
> VALUES('000DEL3','party1');
>
> 3. Delete data:
> DELETE FROM TEST.KINGDOMTABLEWITHNULLPK3 WHERE TENANT_ID='000DEL3' AND
> GLOBAL_PARTY_ID='party1' AND GLOBAL_INPUT_ID IS NULL;
>
> 4. Verify whether the data is deleted (in this case, no; the row remains):
> 0: > select * from TEST.KINGDOMTABLEWITHNULLPK3;
> +------------+------------------+------------------+
> | TENANT_ID  | GLOBAL_PARTY_ID  | GLOBAL_INPUT_ID  |
> +------------+------------------+------------------+
> | 000DEL3    | party1           |                  |
> +------------+------------------+------------------+
> {code}
> ===
> However, if I add another column to the PK, the delete works:
> {code:java}
> 1. Create table:
> CREATE TABLE IF NOT EXISTS TEST.KINGDOMTABLEWITHNULLPK4 (
>     TENANT_ID CHAR(15) NOT NULL,
>     GLOBAL_PARTY_ID VARCHAR,
>     GLOBAL_INPUT_ID VARCHAR,
>     TRAN_ID VARCHAR,
>     CONSTRAINT PK PRIMARY KEY (
>         TENANT_ID,
>         GLOBAL_PARTY_ID,
>         GLOBAL_INPUT_ID DESC,
>         TRAN_ID
>     )
> ) MULTI_TENANT=true;
>
> 2. Upsert data:
> UPSERT INTO TEST.KINGDOMTABLEWITHNULLPK4 (TENANT_ID, GLOBAL_PARTY_ID, TRAN_ID)
> VALUES('000DEL3','party1','1');
>
> 3. Delete data:
> DELETE FROM TEST.KINGDOMTABLEWITHNULLPK4 WHERE TENANT_ID='000DEL3' AND
> GLOBAL_PARTY_ID='party1' AND GLOBAL_INPUT_ID IS NULL AND TRAN_ID='1';
>
> 4. Check whether the data is deleted (in this case, yes; no rows remain):
>  select * from TEST.KINGDOMTABLEWITHNULLPK4;
> +------------+------------------+------------------+-----------+
> | TENANT_ID  | GLOBAL_PARTY_ID  | GLOBAL_INPUT_ID  | TRAN_ID   |
> +------------+------------------+------------------+-----------+
> +------------+------------------+------------------+-----------+
> {code}
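One plausible mechanism for this class of bug is that the UPSERT and DELETE paths build different row keys when the trailing DESC PK column is null. This is a hypothetical sketch only: the separator bytes and key-building logic below are illustrative assumptions, not Phoenix's actual encoding.

```python
# Illustrative only: an UPSERT that omits a trailing null PK column and a
# DELETE that appends a separator byte for it would target different row keys.
ASC_SEP = b"\x00"    # illustrative separator after an ascending VARCHAR column
DESC_SEP = b"\xff"   # illustrative inverted separator for a trailing DESC column

def upsert_row_key(tenant_id: bytes, party_id: bytes) -> bytes:
    # The trailing null column contributes nothing to the stored key.
    return tenant_id + ASC_SEP + party_id

def delete_row_key(tenant_id: bytes, party_id: bytes) -> bytes:
    # The delete path (hypothetically) appends a separator for the null DESC column.
    return tenant_id + ASC_SEP + party_id + DESC_SEP

stored = upsert_row_key(b"000DEL3", b"party1")
target = delete_row_key(b"000DEL3", b"party1")
print(stored == target)  # False: the delete never matches the stored row
```

This would also explain why adding a non-null column after the DESC column (as in the second example) makes the delete work: both paths then agree on the key suffix.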





[jira] [Updated] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5570:
-
Affects Version/s: 4.13.1
   4.14.3



[jira] [Updated] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5570:
-

[jira] [Updated] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5570:
-

[jira] [Updated] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5570:
-

[jira] [Created] (PHOENIX-5570) Delete fails to delete data with null value in last column of PK. (all columns are in PK)

2019-11-13 Thread Xu Cang (Jira)
Xu Cang created PHOENIX-5570:


 Summary: Delete fails to delete data with null value in last 
column of PK. (all columns are in PK)
 Key: PHOENIX-5570
 URL: https://issues.apache.org/jira/browse/PHOENIX-5570
 Project: Phoenix
  Issue Type: Bug
Reporter: Xu Cang




Re: [ANNOUNCE] New committer Swaroopa Kadam

2019-05-28 Thread Xu Cang
Congrats! :)

On Tue, May 28, 2019 at 4:18 PM Priyank Porwal 
wrote:

> Congrats Swaroopa!
>
> On Tue, May 28, 2019, 3:24 PM Andrew Purtell  wrote:
>
> > Congratulations Swaroopa!
> >
> > On Tue, May 28, 2019 at 2:38 PM Geoffrey Jacoby 
> > wrote:
> >
> > > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> > Swaroopa
> > > Kadam has accepted our invitation to become a Phoenix committer.
> Swaroopa
> > > has contributed to a number of areas in the project, including the
> query
> > > server[1] and been an active participant in many code reviews for
> others'
> > > patches.
> > >
> > > Congratulations, Swaroopa, and we look forward to many more great
> > > contributions from you!
> > >
> > > Geoffrey Jacoby
> > >
> > > [1] -
> > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20%3D%20Resolved%20AND%20assignee%20in%20(swaroopa)
> > >
> >
> >
> > --
> > Best regards,
> > Andrew
> >
> > Words like orphans lost among the crosstalk, meaning torn from truth's
> > decrepit hands
> >- A23, Crosstalk
> >
>


[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5278:
-
External issue URL:   (was: 
https://issues.apache.org/jira/browse/PHOENIX-3377)
 External issue ID: PHOENIX-3377

> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> -
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> We have seen scenarios like the following: a tenant-specific view is deleted, 
> the same tenant-specific view is recreated with new columns, and queries 
> against it then fail with an NPE over syscat due to corrupt data. The view's 
> column count changed, but the Phoenix syscat table did not properly record 
> this, so querying the view always triggers a NullPointerException. Adding 
> this unit test will help us further debug the exact corruption and give us 
> confidence in this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: at index 50
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
> ... 10 more
>  
>  
> Related issue: https://issues.apache.org/jira/browse/PHOENIX-3377
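The "at index 50" NullPointerException is consistent with a column list that has one more slot than there are surviving column rows after the drop/recreate. A hypothetical illustration (the counts here are made up for the sketch):

```python
# Hypothetical: syscat's stored column count is stale after the view is
# dropped and recreated, leaving a null slot. The immutable-list copy in the
# stack trace (Guava's ImmutableList.copyOf) rejects null elements, which
# surfaces as the NPE "at index 50".
stale_column_count = 51                              # what syscat claims (illustrative)
surviving_columns = [f"COL{i}" for i in range(50)]   # what actually exists
slots = surviving_columns + [None] * (stale_column_count - len(surviving_columns))
first_null = next(i for i, c in enumerate(slots) if c is None)
print(first_null)  # 50, matching the index in the exception message
```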



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5278) Add unit test to make sure drop/recreate of tenant view with added columns doesn't corrupt syscat

2019-05-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5278:
-
External issue URL: https://issues.apache.org/jira/browse/PHOENIX-3377

> Add unit test to make sure drop/recreate of tenant view with added columns 
> doesn't corrupt syscat
> -
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saksham Gangwar
>Priority: Minor
>
> There have been scenarios similar to: deleting a tenant-specific view, 
> recreating the same tenant-specific view with new columns and while querying 
> the query fails with NPE over syscat due to corrupt data. View column count 
> is changed but Phoenix syscat table did not properly update this info which 
> causing querying the view always trigger null pointer exception. So the 
> addition of this unit test will help us further debug the exact issue of 
> corruption and give us confidence over this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: at index 50
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
> ... 10 more
>  
>  
> Related issue: https://issues.apache.org/jira/browse/PHOENIX-3377



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4181) Drop tenant views columns when base view column is dropped

2019-04-24 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4181:
-
Description: 
# If you create a base table, a base view on the base table, and then a tenant 
view on the base view, when a base view column is dropped, it should get 
dropped from the tenant views as well.

This is currently not happening. See the attached test.

  was:
If you create a base table, a base view on the base table, and then a tenant 
view on the base view, when a base view column is dropped, it should get 
dropped from the tenant views as well.

This is currently not happening.  See the attached test.


> Drop tenant views columns when base view column is dropped
> --
>
> Key: PHOENIX-4181
> URL: https://issues.apache.org/jira/browse/PHOENIX-4181
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4181_test.master.patch
>
>
> # If you create a base table, a base view on the base table, and then a 
> tenant view on the base view, when a base view column is dropped, it should 
> get dropped from the tenant views as well.
> This is currently not happening. See the attached test.





[jira] [Updated] (PHOENIX-5188) IndexedKeyValue should populate KeyValue fields

2019-03-13 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5188:
-
Attachment: PHOENIX-5188-4.x-HBase-1.4..addendum.patch

> IndexedKeyValue should populate KeyValue fields
> ---
>
> Key: PHOENIX-5188
> URL: https://issues.apache.org/jira/browse/PHOENIX-5188
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-5188-4.x-HBase-1.4..addendum.patch, 
> PHOENIX-5188-4.x-HBase-1.4.patch, PHOENIX-5188.patch
>
>
> IndexedKeyValue subclasses the HBase KeyValue class, which has three primary 
> fields: bytes, offset, and length. These fields aren't populated by 
> IndexedKeyValue because it's concerned with index mutations, and has its own 
> fields that its own methods use. 
> However, KeyValue and its Cell interface have quite a few methods that assume 
> these fields are populated, and the HBase-level factory methods generally 
> ensure they're populated. Phoenix code should do the same, to maintain the 
> polymorphic contract. This is important in cases like custom 
> ReplicationEndpoints where HBase-level code may be iterating over WALEdits 
> that contain both KeyValues and IndexedKeyValues and may need to interrogate 
> their contents. 
> Since the index mutation has a row key, this is straightforward. 
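The polymorphic-contract point can be illustrated with a toy example (hypothetical classes, not HBase's actual KeyValue/Cell API): a subclass that leaves its parent's fields unset breaks any caller that reads the object through the parent type, while populating them in the constructor keeps parent-level accessors valid.

```java
// Toy illustration of the polymorphic contract; CellLike and
// IndexedCellLike are hypothetical stand-ins, not HBase classes.
class CellLike {
    protected final byte[] bytes;
    protected final int offset;
    protected final int length;

    CellLike(byte[] bytes, int offset, int length) {
        this.bytes = bytes;
        this.offset = offset;
        this.length = length;
    }

    int getLength() { return length; } // parent-level accessor
}

class IndexedCellLike extends CellLike {
    IndexedCellLike(byte[] rowKey) {
        // Populate the inherited fields so code that only knows the
        // parent type (e.g. a ReplicationEndpoint walking WALEdits)
        // still sees sensible values.
        super(rowKey, 0, rowKey.length);
    }
}

public class PolymorphicContractDemo {
    public static void main(String[] args) {
        CellLike c = new IndexedCellLike(new byte[] {1, 2, 3});
        System.out.println(c.getLength()); // prints 3
    }
}
```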





[jira] [Updated] (PHOENIX-5147) Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )

2019-02-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5147:
-
Attachment: PHOENIX-5147.4.x-HBase-1.3.003.patch

> Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )
> --
>
> Key: PHOENIX-5147
> URL: https://issues.apache.org/jira/browse/PHOENIX-5147
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>    Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-5147.4.x-HBase-1.3.001.patch, 
> PHOENIX-5147.4.x-HBase-1.3.002.patch, PHOENIX-5147.4.x-HBase-1.3.003.patch
>
>
> We should add an option that allows a database admin to disable spooling 
> from the server side, especially until PHOENIX-5135 is fixed.





[jira] [Updated] (PHOENIX-5147) Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )

2019-02-19 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5147:
-
Attachment: PHOENIX-5147.4.x-HBase-1.3.002.patch

> Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )
> --
>
> Key: PHOENIX-5147
> URL: https://issues.apache.org/jira/browse/PHOENIX-5147
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>    Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-5147.4.x-HBase-1.3.001.patch, 
> PHOENIX-5147.4.x-HBase-1.3.002.patch
>
>
> We should add an option that allows a database admin to disable spooling 
> from the server side, especially until PHOENIX-5135 is fixed.





[jira] [Assigned] (PHOENIX-5147) Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )

2019-02-19 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-5147:


Assignee: Xu Cang

> Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )
> --
>
> Key: PHOENIX-5147
> URL: https://issues.apache.org/jira/browse/PHOENIX-5147
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>    Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Major
>
> We should add an option that allows a database admin to disable spooling 
> from the server side, especially until PHOENIX-5135 is fixed.





[jira] [Updated] (PHOENIX-5147) Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )

2019-02-19 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5147:
-
Summary: Add an option to disable spooling ( SORT MERGE strategy in 
QueryCompiler )  (was: Add an option to disable spooling)

> Add an option to disable spooling ( SORT MERGE strategy in QueryCompiler )
> --
>
> Key: PHOENIX-5147
> URL: https://issues.apache.org/jira/browse/PHOENIX-5147
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>    Reporter: Xu Cang
>Priority: Major
>
> We should add an option that allows a database admin to disable spooling 
> from the server side, especially until PHOENIX-5135 is fixed.





[jira] [Created] (PHOENIX-5147) Add an option to disable spooling

2019-02-19 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-5147:


 Summary: Add an option to disable spooling
 Key: PHOENIX-5147
 URL: https://issues.apache.org/jira/browse/PHOENIX-5147
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0
Reporter: Xu Cang


We should add an option that allows a database admin to disable spooling from 
the server side, especially until PHOENIX-5135 is fixed.





[jira] [Created] (PHOENIX-5134) Phoenix Connection Driver #normalize does not distinguish different url with same ZK quorum but different Properties

2019-02-12 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-5134:


 Summary: Phoenix Connection Driver #normalize does not distinguish 
different url with same ZK quorum but different Properties
 Key: PHOENIX-5134
 URL: https://issues.apache.org/jira/browse/PHOENIX-5134
 Project: Phoenix
  Issue Type: Improvement
Reporter: Xu Cang


In this code:
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java#L228

Phoenix uses a cache to maintain HConnections. The cache's key is generated 
by the 'normalize' method here:
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java#L312
The normalize method takes the ZK quorum, port, rootNode, principal, and 
keytab into account, but not the properties passed in the url.

E.g.
Request one connection with this url: 
jdbc:phoenix:localhost:61733;TenantId=1
Request another connection with this url: 
jdbc:phoenix:localhost:61733;TenantId=2

With the current logic, both requests resolve to the same HConnection in the 
connection cache. This might not be what we really want: different tenants 
may want different HBase configs (such as HBase timeout settings), and with 
the same HConnection returned, tenant2's config is silently ignored. 
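A sketch of one possible fix (cacheKey is a hypothetical helper for illustration, not Phoenix's actual normalize method): fold the passed-in Properties into the cache key, so connections that differ only in properties no longer collide.

```java
import java.util.Properties;
import java.util.TreeMap;

public class ConnectionCacheKeyDemo {
    // Hypothetical cache-key builder: combine the normalized url with a
    // deterministic rendering of all Properties entries.
    static String cacheKey(String url, Properties info) {
        TreeMap<String, String> sorted = new TreeMap<>();
        for (String name : info.stringPropertyNames()) {
            sorted.put(name, info.getProperty(name));
        }
        return url + "|" + sorted; // TreeMap gives a stable ordering
    }

    public static void main(String[] args) {
        Properties t1 = new Properties();
        t1.setProperty("TenantId", "1");
        Properties t2 = new Properties();
        t2.setProperty("TenantId", "2");
        String url = "jdbc:phoenix:localhost:61733";
        // Distinct tenants now map to distinct keys instead of sharing
        // one HConnection.
        System.out.println(cacheKey(url, t1).equals(cacheKey(url, t2))); // prints false
    }
}
```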








[jira] [Assigned] (PHOENIX-1160) Allow an index to be declared as immutable

2019-01-25 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-1160:


Assignee: (was: Xu Cang)

> Allow an index to be declared as immutable
> --
>
> Key: PHOENIX-1160
> URL: https://issues.apache.org/jira/browse/PHOENIX-1160
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
> Attachments: PHOENIX-1160.WIP.patch
>
>
> Currently, a table must be marked as immutable, through the 
> IMMUTABLE_ROWS=true property specified at creation time. In this case, all 
> indexes added to the table are immutable, while without this property, all 
> indexes are mutable.
> Instead, we should support a mix of immutable and mutable indexes. We already 
> have an INDEX_TYPE field on our metadata row. We can add a new IMMUTABLE 
> keyword and specify an index is immutable like this:
> {code}
> CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
> {code}
> It would be up to the application developer to ensure that only columns that 
> don't mutate are part of an immutable index (we already rely on this anyway).





[jira] [Updated] (PHOENIX-5034) Log all critical statements in SYSTEM.LOG table.

2018-12-03 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5034:
-
Attachment: PHOENIX-5034-4.x-HBase-1.3.005.patch

> Log all critical statements in SYSTEM.LOG table.
> 
>
> Key: PHOENIX-5034
> URL: https://issues.apache.org/jira/browse/PHOENIX-5034
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-5034-4.x-HBase-1.3.001.patch, 
> PHOENIX-5034-4.x-HBase-1.3.002.patch, PHOENIX-5034-4.x-HBase-1.3.003.patch, 
> PHOENIX-5034-4.x-HBase-1.3.004.patch, PHOENIX-5034-4.x-HBase-1.3.005.patch
>
>
> In production, engineers sometimes see a table get dropped unexpectedly. 
> It's not easy to scan the raw table in HBase itself to understand what 
> happened and when the table was dropped.
> Since we already have the SYSTEM.LOG query log facility in Phoenix, which 
> samples query statements (logging 1% of statements by default), it would be 
> good to always log critical statements such as "DROP" or "ALTER".
>  
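The always-log rule could sit in front of the existing 1% sampler, roughly like this (illustrative names only, not Phoenix's actual query-log API):

```java
import java.util.concurrent.ThreadLocalRandom;

public class QueryLogSamplerDemo {
    static final double SAMPLE_RATE = 0.01; // log 1% of ordinary statements

    // Critical DDL bypasses sampling; everything else is sampled.
    static boolean shouldLog(String sql) {
        String s = sql.trim().toUpperCase();
        if (s.startsWith("DROP") || s.startsWith("ALTER")) {
            return true;
        }
        return ThreadLocalRandom.current().nextDouble() < SAMPLE_RATE;
    }

    public static void main(String[] args) {
        System.out.println(shouldLog("DROP TABLE my_table")); // prints true
    }
}
```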





[jira] [Updated] (PHOENIX-5034) Log all critical statements in SYSTEM.LOG table.

2018-11-30 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5034:
-
Attachment: PHOENIX-5034-4.x-HBase-1.3.004.patch

> Log all critical statements in SYSTEM.LOG table.
> 
>
> Key: PHOENIX-5034
> URL: https://issues.apache.org/jira/browse/PHOENIX-5034
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-5034-4.x-HBase-1.3.001.patch, 
> PHOENIX-5034-4.x-HBase-1.3.002.patch, PHOENIX-5034-4.x-HBase-1.3.003.patch, 
> PHOENIX-5034-4.x-HBase-1.3.004.patch
>
>
> In production, engineers sometimes see a table get dropped unexpectedly. 
> It's not easy to scan the raw table in HBase itself to understand what 
> happened and when the table was dropped.
> Since we already have the SYSTEM.LOG query log facility in Phoenix, which 
> samples query statements (logging 1% of statements by default), it would be 
> good to always log critical statements such as "DROP" or "ALTER".
>  





[jira] [Updated] (PHOENIX-5034) Log all critical statements in SYSTEM.LOG table.

2018-11-30 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5034:
-
Attachment: PHOENIX-5034-4.x-HBase-1.3.003.patch

> Log all critical statements in SYSTEM.LOG table.
> 
>
> Key: PHOENIX-5034
> URL: https://issues.apache.org/jira/browse/PHOENIX-5034
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-5034-4.x-HBase-1.3.001.patch, 
> PHOENIX-5034-4.x-HBase-1.3.002.patch, PHOENIX-5034-4.x-HBase-1.3.003.patch
>
>
> In production, engineers sometimes see a table get dropped unexpectedly. 
> It's not easy to scan the raw table in HBase itself to understand what 
> happened and when the table was dropped.
> Since we already have the SYSTEM.LOG query log facility in Phoenix, which 
> samples query statements (logging 1% of statements by default), it would be 
> good to always log critical statements such as "DROP" or "ALTER".
>  





[jira] [Updated] (PHOENIX-5033) connect() method in PhoenixDriver should catch exception properly

2018-11-30 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5033:
-
Description: 
See this error in production:

 

Problem executing query. *Stack trace: java.lang.IllegalMonitorStateException: 
attempt to unlock read lock, not locked by current thread*

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:444)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:428)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)

at org.apache.phoenix.jdbc.PhoenixDriver.unlock(PhoenixDriver.java:346)

at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:223)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory$PhoenixConnectionFactory.createPhoenixConnection(ProtectedPhoenixConnectionFactory.java:233)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory.create(ProtectedPhoenixConnectionFactory.java:95)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:59)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:48)

 

 

Questionable code:

 

 
{code:java}
@Override
public Connection connect(String url, Properties info) throws SQLException {
    if (!acceptsURL(url)) {
        return null;
    }
    try {
        lockInterruptibly(LockMode.READ);
        checkClosed();
        return createConnection(url, info);
    } finally {
        unlock(LockMode.READ);
    }
}
{code}

  was:
See this error in production:

 

Problem executing query. *Stack trace: java.lang.IllegalMonitorStateException: 
attempt to unlock read lock, not locked by current thread*

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:444)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:428)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)

at org.apache.phoenix.jdbc.PhoenixDriver.unlock(PhoenixDriver.java:346)

at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:223)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory$PhoenixConnectionFactory.createPhoenixConnection(ProtectedPhoenixConnectionFactory.java:233)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory.create(ProtectedPhoenixConnectionFactory.java:95)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:59)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:48)

at 
pliny.db.PhoenixConnectionProviderImpl$ConnectionType$1.getConnection(PhoenixConnectionProviderImpl.java:158)

at 
pliny.db.PhoenixConnectionProviderImpl.getGenericConnection(PhoenixConnectionProviderImpl.java:67)

at 
communities.util.db.phoenix.ManagedPhoenixConnection.createManagedGenericConnection(ManagedPhoenixConnection.java:73)

at 
communities.util.db.phoenix.ManagedPhoenixConnection.getGenericConnectionForAsyncOperation(ManagedPhoenixConnection.java:51)

at 
communities.util.db.phoenix.AbstractAsyncPhoenixRequest.call(AbstractAsyncPhoenixRequest.java:183)

at 
core.chatter.feeds.read.FeedEntityReadByUserPhoenixQuery.call(FeedEntityReadByUserPhoenixQuery.java:66)

at 
communities.util.db.phoenix.AbstractAsyncPhoenixRequest.call(AbstractAsyncPhoenixRequest.java:1)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

 

 

Questionable code:

 

 
{code:java}
@Override
public Connection connect(String url, Properties info) throws SQLException {
    if (!acceptsURL(url)) {
        return null;
    }
    try {
        lockInterruptibly(LockMode.READ);
        checkClosed();
        return createConnection(url, info);
    } finally {
        unlock(LockMode.READ);
    }
}
{code}


> connect() method in PhoenixDriver should catch exception properly
> -
>
> Key: PHOENIX-5033
> URL: https://issues.apache.org/jira/browse/PHOENIX-5033
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>    Reporter: Xu Cang
>Priority: Minor
>
> See this error in production:
>  
> Problem executing query. *Stack trace: 
> java.lang.IllegalMonitorStateException: attempt to unlock read lock, not 
> locked by current thread*
> at 

[jira] [Updated] (PHOENIX-5034) Log all critical statements in SYSTEM.LOG table.

2018-11-30 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-5034:
-
Attachment: PHOENIX-5034-4.x-HBase-1.3.002.patch

> Log all critical statements in SYSTEM.LOG table.
> 
>
> Key: PHOENIX-5034
> URL: https://issues.apache.org/jira/browse/PHOENIX-5034
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-5034-4.x-HBase-1.3.001.patch, 
> PHOENIX-5034-4.x-HBase-1.3.002.patch
>
>
> In production, engineers sometimes see a table get dropped unexpectedly. 
> It's not easy to scan the raw table in HBase itself to understand what 
> happened and when the table was dropped.
> Since we already have the SYSTEM.LOG query log facility in Phoenix, which 
> samples query statements (logging 1% of statements by default), it would be 
> good to always log critical statements such as "DROP" or "ALTER".
>  





[jira] [Created] (PHOENIX-5034) Log all critical statements in SYSTEM.LOG table.

2018-11-20 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-5034:


 Summary: Log all critical statements in SYSTEM.LOG table.
 Key: PHOENIX-5034
 URL: https://issues.apache.org/jira/browse/PHOENIX-5034
 Project: Phoenix
  Issue Type: Improvement
Reporter: Xu Cang
Assignee: Xu Cang


In production, engineers sometimes see a table get dropped unexpectedly. It's 
not easy to scan the raw table in HBase itself to understand what happened and 
when the table was dropped.

Since we already have the SYSTEM.LOG query log facility in Phoenix, which 
samples query statements (logging 1% of statements by default), it would be 
good to always log critical statements such as "DROP" or "ALTER".

 





[jira] [Created] (PHOENIX-5033) connect() method in PhoenixDriver should catch exception properly

2018-11-20 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-5033:


 Summary: connect() method in PhoenixDriver should catch exception 
properly
 Key: PHOENIX-5033
 URL: https://issues.apache.org/jira/browse/PHOENIX-5033
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.0
Reporter: Xu Cang


See this error in production:

 

Problem executing query. *Stack trace: java.lang.IllegalMonitorStateException: 
attempt to unlock read lock, not locked by current thread*

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:444)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:428)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)

at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)

at org.apache.phoenix.jdbc.PhoenixDriver.unlock(PhoenixDriver.java:346)

at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:223)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory$PhoenixConnectionFactory.createPhoenixConnection(ProtectedPhoenixConnectionFactory.java:233)

at 
phoenix.connection.ProtectedPhoenixConnectionFactory.create(ProtectedPhoenixConnectionFactory.java:95)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:59)

at 
phoenix.util.PhoenixConnectionUtil.getConnection(PhoenixConnectionUtil.java:48)

at 
pliny.db.PhoenixConnectionProviderImpl$ConnectionType$1.getConnection(PhoenixConnectionProviderImpl.java:158)

at 
pliny.db.PhoenixConnectionProviderImpl.getGenericConnection(PhoenixConnectionProviderImpl.java:67)

at 
communities.util.db.phoenix.ManagedPhoenixConnection.createManagedGenericConnection(ManagedPhoenixConnection.java:73)

at 
communities.util.db.phoenix.ManagedPhoenixConnection.getGenericConnectionForAsyncOperation(ManagedPhoenixConnection.java:51)

at 
communities.util.db.phoenix.AbstractAsyncPhoenixRequest.call(AbstractAsyncPhoenixRequest.java:183)

at 
core.chatter.feeds.read.FeedEntityReadByUserPhoenixQuery.call(FeedEntityReadByUserPhoenixQuery.java:66)

at 
communities.util.db.phoenix.AbstractAsyncPhoenixRequest.call(AbstractAsyncPhoenixRequest.java:1)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

 

 

Questionable code:

 

 
{code:java}
@Override
public Connection connect(String url, Properties info) throws SQLException {
    if (!acceptsURL(url)) {
        return null;
    }
    try {
        lockInterruptibly(LockMode.READ);
        checkClosed();
        return createConnection(url, info);
    } finally {
        unlock(LockMode.READ);
    }
}
{code}
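The IllegalMonitorStateException above fits the classic lock-scope bug: if lock acquisition itself fails (e.g. the thread is interrupted inside lockInterruptibly), the finally block still runs unlock() on a lock that was never acquired. A minimal sketch of the safe shape, using a plain ReentrantReadWriteLock rather than Phoenix's internal lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockScopeDemo {
    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

    static String guardedWork() {
        // Acquire OUTSIDE the try: if acquisition throws, we never enter
        // the try, so finally cannot unlock a lock we do not hold.
        LOCK.readLock().lock();
        try {
            return "work-done";
        } finally {
            LOCK.readLock().unlock(); // guaranteed to be held here
        }
    }

    public static void main(String[] args) {
        System.out.println(guardedWork()); // prints work-done
    }
}
```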





[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-11-13 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.008.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch, 
> PHOENIX-4830-4.x-HBase-1.3.007.patch, PHOENIX-4830-4.x-HBase-1.3.008.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> There may be some errors in the ScanUtil.setupReverseScan code.
>  then
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> execute sql
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> throw exception
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0

[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-10-31 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.007.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch, 
> PHOENIX-4830-4.x-HBase-1.3.007.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> There may be some errors in the ScanUtil.setupReverseScan code.
>  then
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> execute sql
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> throw exception
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0

[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-10-30 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.007.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> Wrong results: the row (3, 33) is missing and the output starts at (2, 22).
> {code}
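> The failing subquery form should return the same rows as the plain ORDER BY
> query above; restated as a sketch (expected output taken from the transcript,
> not re-run):
> {code:sql}
> SELECT * FROM test WHERE id IN (SELECT id FROM test)
> ORDER BY id DESC LIMIT 2 OFFSET 0;
> -- Expected: (3, 33), (2, 22) -- the actual Phoenix result starts at (2, 22).
> {code}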
> There may be an error in the ScanUtil.setupReverseScan code.
>  Then:
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> execute sql
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> throw exception
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Ma

[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-
Description: 
For example this query example is incorrect: CREATE TABLE my_schema.my_table ( 
id BIGINT not null primary key, date)

 

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the grammar definition there is 
correct, which means the website is generated from a very old version of 
phoenix.csv.

Is there any plan to update it? Thanks.

 FYI [~karanmehta93]   @Thomas

 

  was:
For example this query example is incorrect: CREATE TABLE my_schema.my_table ( 
id BIGINT not null primary key, date)

 

[https://phoenix.apache.org/language/index.html]

I checked the master branch and 4.x branch, the code is correct though. Meaning 
the website is using a very old version of phoenix.csv.

Any plan to update it? thanks.

 

[~karanmehta93] 

 


> Apache Phoenix website Grammar page is running on an very old version
> -
>
> Key: PHOENIX-4918
> URL: https://issues.apache.org/jira/browse/PHOENIX-4918
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Xu Cang
>Priority: Trivial
>
> For example this query example is incorrect: CREATE TABLE my_schema.my_table 
> ( id BIGINT not null primary key, date)
>  
> [https://phoenix.apache.org/language/index.html]
> I checked the master branch and the 4.x branch; the grammar definition there 
> is correct, which means the website is generated from a very old version of 
> phoenix.csv.
> Is there any plan to update it? Thanks.
>  FYI [~karanmehta93]   @Thomas
>  
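> The example is invalid because the second column is given no data type. A 
> corrected form (assuming a DATE column was intended; the actual intended type 
> is not stated in the report) would be:
> {code:sql}
> CREATE TABLE my_schema.my_table (
>     id BIGINT NOT NULL PRIMARY KEY,
>     date DATE
> );
> {code}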



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-
Description: 
For example this query example is incorrect: CREATE TABLE my_schema.my_table ( 
id BIGINT not null primary key, date)

 

[https://phoenix.apache.org/language/index.html]

I checked the master branch and 4.x branch, the code is correct though. Meaning 
the website is using a very old version of phoenix.csv.

Any plan to update it? thanks.

 FYI [~karanmehta93]  

 

  was:
For example this query example is incorrect: CREATE TABLE my_schema.my_table ( 
id BIGINT not null primary key, date)

 

[https://phoenix.apache.org/language/index.html]

I checked the master branch and 4.x branch, the code is correct though. Meaning 
the website is using a very old version of phoenix.csv.

Any plan to update it? thanks.

 FYI [~karanmehta93]   @Thomas

 




[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-
Summary: Apache Phoenix website Grammar page is running on an old version  
(was: Apache Phoenix website Grammar page is running on an very old version)



[jira] [Created] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-4918:


 Summary: Apache Phoenix website Grammar page is running on an very 
old version
 Key: PHOENIX-4918
 URL: https://issues.apache.org/jira/browse/PHOENIX-4918
 Project: Phoenix
  Issue Type: Bug
Reporter: Xu Cang




[jira] [Updated] (PHOENIX-1160) Allow an index to be declared as immutable

2018-08-29 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-1160:
-
Attachment: PHOENIX-1160.WIP.patch

> Allow an index to be declared as immutable
> --
>
> Key: PHOENIX-1160
> URL: https://issues.apache.org/jira/browse/PHOENIX-1160
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>    Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-1160.WIP.patch
>
>
> Currently, a table must be marked as immutable, through the 
> IMMUTABLE_ROWS=true property specified at creation time. In this case, all 
> indexes added to the table are immutable, while without this property, all 
> indexes are mutable.
> Instead, we should support a mix of immutable and mutable indexes. We already 
> have an INDEX_TYPE field on our metadata row. We can add a new IMMUTABLE 
> keyword and specify an index is immutable like this:
> {code}
> CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
> {code}
> It would be up to the application developer to ensure that only columns that 
> don't mutate are part of an immutable index (we already rely on this anyway).
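> A usage sketch of the proposal (hypothetical session; the table and column 
> names are the ones from the example above):
> {code:sql}
> -- The table is mutable: no IMMUTABLE_ROWS=true at creation time.
> CREATE TABLE bar (c1 BIGINT NOT NULL PRIMARY KEY, c2 VARCHAR);
> -- Proposed new keyword: only this index is declared immutable.
> CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
> -- Indexes created without the keyword stay mutable.
> CREATE INDEX baz ON bar(c2);
> {code}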





[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-21 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.006.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch
>
>

[jira] [Assigned] (PHOENIX-1160) Allow an index to be declared as immutable

2018-08-14 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-1160:


Assignee: Xu Cang



[jira] [Updated] (PHOENIX-4612) Index immutability doesn't change when data table immutable changes

2018-08-14 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4612:
-
Attachment: PHOENIX-4612-4.x-HBase-1.3.001.patch

> Index immutability doesn't change when data table immutable changes
> ---
>
> Key: PHOENIX-4612
> URL: https://issues.apache.org/jira/browse/PHOENIX-4612
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
> Attachments: PHOENIX-4612-4.x-HBase-1.3.001.patch
>
>
> The immutability of an index should change when the data table's immutability 
> changes. It is probably best not to allow table immutability to change as 
> part of PHOENIX-1160.
> Here's a test that currently fails:
> {code}
> private static void assertImmutability(Connection conn, String tableName,
>         boolean expectedImmutableRows) throws Exception {
>     ResultSet rs = conn.createStatement().executeQuery(
>             "SELECT /*+ NO_INDEX */ v FROM " + tableName);
>     rs.next();
>     PTable table = conn.unwrap(PhoenixConnection.class).getMetaDataCache()
>             .getTableRef(new PTableKey(null, tableName)).getTable();
>     assertEquals(expectedImmutableRows, table.isImmutableRows());
>     PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
>     rs = stmt.executeQuery("SELECT v FROM " + tableName);
>     rs.next();
>     assertTrue(stmt.getQueryPlan().getTableRef().getTable().getType() == PTableType.INDEX);
>     table = conn.unwrap(PhoenixConnection.class).getMetaDataCache()
>             .getTableRef(new PTableKey(null, tableName)).getTable();
>     assertEquals(expectedImmutableRows, table.isImmutableRows());
>     for (PTable index : table.getIndexes()) {
>         assertEquals(expectedImmutableRows, index.isImmutableRows());
>     }
> }
>
> @Test
> public void testIndexImmutabilityChangesWithTable() throws Exception {
>     Connection conn = DriverManager.getConnection(getUrl());
>     String tableName = generateUniqueName();
>     String indexName = generateUniqueName();
>     conn.createStatement().execute("CREATE IMMUTABLE TABLE " + tableName
>             + "(k VARCHAR PRIMARY KEY, v VARCHAR) COLUMN_ENCODED_BYTES=NONE, "
>             + "IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN");
>     conn.createStatement().execute("CREATE INDEX " + indexName + " ON "
>             + tableName + "(v)");
>     assertImmutability(conn, tableName, true);
>     conn.createStatement().execute("ALTER TABLE " + tableName
>             + " SET IMMUTABLE_ROWS=false");
>     assertImmutability(conn, tableName, false);
> }
> {code}
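> In sqlline terms, the expectation the test encodes is (a sketch; the table 
> and index names are illustrative):
> {code:sql}
> CREATE IMMUTABLE TABLE t (k VARCHAR PRIMARY KEY, v VARCHAR)
>     COLUMN_ENCODED_BYTES=NONE, IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN;
> CREATE INDEX i ON t(v);
> ALTER TABLE t SET IMMUTABLE_ROWS=false;
> -- Expected: index i becomes mutable along with t; today its flag is unchanged.
> {code}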





[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-13 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.005.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch
>
>

[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-12 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.004.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch
>
>

[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-11 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.003.patch

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> there may be some errors. ScanUtil.setupReverseScan code.
>  then
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> Execute this SQL:
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> It throws this exception:
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilte

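The IllegalStateException in the trace above enforces an ordering invariant on skip-scan seek hints. As an illustration only — a minimal sketch, not Phoenix's SkipScanFilter or HBase code — the check amounts to an unsigned row-key comparison whose direction flips for reverse scans; a hint equal to the previous one (prev == next, exactly what the trace shows for the key of id=7) fails in either direction:

```java
// A minimal sketch of the ordering invariant behind the
// "next hint must come after previous hint" error above. This is NOT
// Phoenix's SkipScanFilter code; it only illustrates the check applied
// to seek hints. Keys compare as unsigned bytes; in a forward scan each
// hint must sort strictly after the previous one, and in a reverse scan
// strictly before. A repeated hint (prev == next) fails either way.
public class HintOrderSketch {

    // Unsigned lexicographic row-key comparison, as HBase orders rows.
    static int compareRows(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    static boolean hintOrderValid(byte[] prev, byte[] next, boolean reversed) {
        int cmp = compareRows(next, prev);
        return reversed ? cmp < 0 : cmp > 0;
    }

    public static void main(String[] args) {
        byte[] key7 = {(byte) 0x80, 0, 0, 0, 0, 0, 0, 7}; // row key of BIGINT 7
        byte[] key6 = {(byte) 0x80, 0, 0, 0, 0, 0, 0, 6}; // row key of BIGINT 6
        // Repeating the same hint is invalid in either direction,
        // which is what the DoNotRetryIOException above reports.
        System.out.println(hintOrderValid(key7, key7, true));  // false
        // In a reverse scan the next hint must be a *smaller* key.
        System.out.println(hintOrderValid(key7, key6, true));  // true
    }
}
```

(The `\x80...\x07` bytes in the trace are Phoenix's sign-flipped encoding of BIGINT 7.)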
[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.002.patch


[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-10 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4830:
-
Attachment: PHOENIX-4830-4.x-HBase-1.3.001.patch


[jira] [Assigned] (PHOENIX-4830) order by primary key desc return wrong results

2018-08-09 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-4830:


Assignee: Xu Cang


[jira] [Updated] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-08-07 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4647:
-
Attachment: (was: PHOENIX-4647.master.002.patch)

> Column header doesn't handle optional arguments correctly
> -
>
> Key: PHOENIX-4647
> URL: https://issues.apache.org/jira/browse/PHOENIX-4647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Shehzaad Nakhoda
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4647.4.x-HBase-1.3.002.patch, 
> PHOENIX-4647.master.001.patch
>
>
> SUBSTR(NAME, 1)
> is being rendered as
> SUBSTR(NAME, 1, )
> in places like column headings.
> For example:
> 0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
> VARCHAR);
> No rows affected (1.252 seconds)
> 0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
> 1 row affected (0.025 seconds)
> 0: jdbc:phoenix:> select substr(name, 1) from hello_table;
> ++
> | SUBSTR(NAME, 1, )  |
> ++
> | abc|
> ++
> Looks to me like there's a bug - 
> SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 
> 1, )
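The expected rendering rule can be sketched as follows. This is a minimal illustration only — a hypothetical `render` helper, not Phoenix's actual ExpressionCompiler/toString logic: omitted optional arguments (modeled here as nulls) should be dropped rather than rendered as an empty trailing slot, which is what produces "SUBSTR(NAME, 1, )".

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

// Sketch of the desired column-header rendering (hypothetical helper,
// not Phoenix code): skip optional arguments the user did not supply
// instead of emitting an empty slot like "SUBSTR(NAME, 1, )".
public class ColumnHeaderSketch {

    // null stands for an optional argument the user omitted.
    static String render(String functionName, List<String> args) {
        String joined = args.stream()
                .filter(Objects::nonNull)  // drop omitted optional arguments
                .collect(Collectors.joining(", "));
        return functionName + "(" + joined + ")";
    }

    public static void main(String[] args) {
        // SUBSTR(NAME, 1) with the optional length argument omitted:
        System.out.println(render("SUBSTR", Arrays.asList("NAME", "1", null)));  // SUBSTR(NAME, 1)
    }
}
```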



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-08-07 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4647:
-
Attachment: PHOENIX-4647.4.x-HBase-1.3.002.patch



[jira] [Updated] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-08-07 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4647:
-
Attachment: PHOENIX-4647.master.002.patch



[jira] [Updated] (PHOENIX-1718) Unable to find cached index metadata during the stablity test with phoenix

2018-08-06 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-1718:
-
Description: 
I am running a stability test with Phoenix 4.2.1, but the region servers became very 
slow after 4 hours, and I found error logs in the region server log file.

In this scenario the cluster has 8 machines (128 GB RAM, 24 cores, 48 TB disk). I 
set up 2 region servers on each machine (16 region servers in total).

1. Create 8 tables, TEST_USER0 through TEST_USER7, each with a local index.

create table TEST_USER0 (id varchar primary key , attr1 varchar, attr2 
varchar,attr3 varchar,attr4 varchar,attr5 varchar,attr6 integer,attr7 
integer,attr8 integer,attr9 integer,attr10 integer ) 
DATA_BLOCK_ENCODING='FAST_DIFF',VERSIONS=1,BLOOMFILTER='ROW',COMPRESSION='LZ4',BLOCKSIZE
 = '65536',SALT_BUCKETS=32;
create local index TEST_USER_INDEX0 on 
TEST5.TEST_USER0(attr1,attr2,attr3,attr4,attr5,attr6,attr7,attr8,attr9,attr10);


2. Deploy a Phoenix client on each machine to upsert data into the tables (client 1 
upserts into TEST_USER0, client 2 into TEST_USER1, and so on).
Each Phoenix client starts 6 threads; each thread upserts 10,000 rows per batch and 
500,000,000 rows in total.
All 8 clients ran at the same time.

The log is shown below. After running for about 4 hours there were roughly 
1,000,000,000 rows in HBase; errors began occurring frequently at about 4 hours 
50 minutes, and the rps became very slow, less than 10,000 (7, in normal).

2015-03-09 19:15:13,337 ERROR [B.DefaultRpcServer.handler=2,queue=2,port=60022] 
parallel.BaseTaskRunner: Found a failed task because: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
(INT10): Unable to find cached index metadata. key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
java.util.concurrent.ExecutionException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
(INT10): Unable to find cached index metadata. key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3580)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3469)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29931)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): 
ERROR 2008 (INT10): Unable to find cached index metadata. 
key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52

[jira] [Updated] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2018-08-06 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4476:
-
Attachment: PHOENIX-4476-4.x-HBase-1.3.003.patch

> Range scan used for point lookups if filter is not in order of primary keys
> ---
>
> Key: PHOENIX-4476
> URL: https://issues.apache.org/jira/browse/PHOENIX-4476
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Mujtaba Chohan
>Assignee: Xu Cang
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4476-4.x-HBase-1.3.002.patch, 
> PHOENIX-4476-4.x-HBase-1.3.003.patch
>
>
> {noformat}
> DROP TABLE TEST;
> CREATE TABLE IF NOT EXISTS TEST (
> PK1 CHAR(1) NOT NULL,
> PK2 VARCHAR NOT NULL,
> PK3 VARCHAR NOT NULL,
> PK4 UNSIGNED_LONG NOT NULL,
> PK5 VARCHAR NOT NULL,
> V1 VARCHAR,
> V2 VARCHAR,
> V3 UNSIGNED_LONG
> CONSTRAINT state_pk PRIMARY KEY (
>   PK1,
>   PK2,
>   PK3,
>   PK4,
>   PK5
> )
> );
> // Incorrect explain plan with un-ordered PKs
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK5, PK2, PK3, PK4) IN
> (('A', 'E', 'N', 'T', 3), ('A', 'Y', 'G', 'T', 4));
>
> PLAN: CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST ['A']
>       SERVER FILTER BY (PK1, PK5, PK2, PK3, PK4) IN
>       ([65,69,0,78,0,84,0,0,0,0,0,0,0,0,3],[65,89,0,71,0,84,0,0,0,0,0,0,0,0,4])
> EST_BYTES_READ: null, EST_ROWS_READ: null
>
> // Correct explain plan with PKs in order
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK2, PK3, PK4, PK5) IN
> (('A', 'E', 'N', 3, 'T'), ('A', 'Y', 'G', 4, 'T'));
>
> PLAN: CLIENT 1-CHUNK 2 ROWS 712 BYTES PARALLEL 1-WAY ROUND ROBIN POINT
>       LOOKUP ON 2 KEYS OVER TEST
> EST_BYTES_READ: 712
> {noformat}
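As a client-side workaround until the optimizer handles this, the IN-list value tuples can be permuted into declared primary-key order before the SQL is built, so the row-value constructor matches the row key and Phoenix plans a point lookup. A minimal sketch — a hypothetical helper, not a Phoenix API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Client-side workaround sketch (hypothetical helper, NOT a Phoenix
// API): permute each IN-list value tuple from the filter's column order
// into declared primary-key order, so Phoenix can plan a point lookup
// instead of a range scan with a server-side filter.
public class TupleReorderSketch {

    static List<List<Object>> reorder(List<String> pkOrder,
                                      List<String> filterCols,
                                      List<List<Object>> tuples) {
        // For each PK column, find its position in the filter's column list.
        int[] idx = new int[pkOrder.size()];
        for (int i = 0; i < pkOrder.size(); i++) {
            idx[i] = filterCols.indexOf(pkOrder.get(i));
        }
        List<List<Object>> out = new ArrayList<>();
        for (List<Object> tuple : tuples) {
            List<Object> reordered = new ArrayList<>();
            for (int i : idx) {
                reordered.add(tuple.get(i));
            }
            out.add(reordered);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> pk = Arrays.asList("PK1", "PK2", "PK3", "PK4", "PK5");
        List<String> filter = Arrays.asList("PK1", "PK5", "PK2", "PK3", "PK4");
        List<Object> tuple = Arrays.asList("A", "E", "N", "T", 3L);
        // ('A','E','N','T',3) under (PK1,PK5,PK2,PK3,PK4) becomes
        // ('A','N','T',3,'E') under (PK1,PK2,PK3,PK4,PK5).
        System.out.println(reorder(pk, filter, Collections.singletonList(tuple)));
    }
}
```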





[jira] [Updated] (PHOENIX-4833) Function Undefined when using nested sub query

2018-08-01 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4833:
-
Description: 
Function Undefined when using nested sub query

  

 0: jdbc:phoenix:scipnode1,scipnode2,scipnode3> SELECT usagekb(strstatscount) 
as dd from (select to_char(SUM(strstatscount),'#') from  (SELECT 
SUM(DOWNLOADDATA)AS strstatscount FROM JIOANDSF.TBLRAWOFFLOADDATA   ))  ;
Error: ERROR 6001 (42F01): Function undefined. functionName=USAGEKB 
(state=42F01,code=6001)
org.apache.phoenix.schema.FunctionNotFoundException: ERROR 6001 (42F01): 
Function undefined. functionName=USAGEKB
    at 
org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.resolveFunction(FromCompiler.java:725)
    at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:327)
    at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:696)
    at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
    at 
org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
    at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
    at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
    at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:522)
    at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:202)
    at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:157)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:476)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:442)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:300)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:290)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:289)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1742)
    at sqlline.Commands.execute(Commands.java:822)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:813)
    at sqlline.SqlLine.begin(SqlLine.java:686)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:291)

 

0: jdbc:phoenix:scipnode1,scipnode2,scipnode3> SELECT usagekb(strstatscount) 
FROM (SELECT to_char(SUM(DOWNLOADDATA),'#') AS strstatscount FROM 
JIOANDSF.TBLRAWOFFLOADDATA   HAVING COUNT(*) > 0  )  ;
+------------------------------------------+
| USAGEINKB(TO_CHAR(SUM(A.DOWNLOADDATA)))  |
+------------------------------------------+
| 7.82GB                                   |
+------------------------------------------+
1 row selected (0.033 seconds)
0: jdbc:phoenix:scipnode1,scipnode2,scipnode3>


[jira] [Updated] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2018-08-01 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4476:
-
Attachment: (was: PHOENIX-4476-workInProgress.001.patch)

> Range scan used for point lookups if filter is not in order of primary keys
> ---
>
> Key: PHOENIX-4476
> URL: https://issues.apache.org/jira/browse/PHOENIX-4476
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Mujtaba Chohan
>Assignee: Xu Cang
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4476-4.x-HBase-1.3.002.patch
>
>
> {noformat}
> DROP TABLE TEST;
> CREATE TABLE IF NOT EXISTS TEST (
> PK1 CHAR(1) NOT NULL,
> PK2 VARCHAR NOT NULL,
> PK3 VARCHAR NOT NULL,
> PK4 UNSIGNED_LONG NOT NULL,
> PK5 VARCHAR NOT NULL,
> V1 VARCHAR,
> V2 VARCHAR,
> V3 UNSIGNED_LONG
> CONSTRAINT state_pk PRIMARY KEY (
>   PK1,
>   PK2,
>   PK3,
>   PK4,
>   PK5
> )
> );
> // Incorrect explain plan with un-ordered PKs
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK5, PK2, PK3, PK4) IN (('A', 'E', 
> 'N', 'T', 3), ('A', 'Y', 'G', 'T', 4)); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST ['A'] | null 
> | null   |
> | SERVER FILTER BY (PK1, PK5, PK2, PK3, PK4) IN 
> ([65,69,0,78,0,84,0,0,0,0,0,0,0,0,3],[65,89,0,71,0,84,0,0,0,0,0,0,0,0,4]) | 
> null   |
> +--+--+--+-+
> // Correct explain plan with PKs in order
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1,PK2,PK3,PK4,PK5) IN (('A', 'E', 'N',3, 
> 'T'),('A', 'Y', 'G', 4, 'T')); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK 2 ROWS 712 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 
> 2 KEYS OVER TEST | 712  | |
> +--+--+--+-+
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
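
A client-side workaround for the plan difference shown above is to normalize the IN-clause columns (and each value tuple) into the table's declared primary-key order before sending the query. A minimal sketch with hypothetical helper names, not Phoenix code:

```python
# Client-side workaround sketch: reorder an IN-clause column list (and its
# value tuples) into the table's declared primary-key order so the planner
# can produce a point lookup. Hypothetical helper, not Phoenix code.

PK_ORDER = ["PK1", "PK2", "PK3", "PK4", "PK5"]  # from the CREATE TABLE above

def normalize_in_clause(columns, tuples, pk_order=PK_ORDER):
    rank = {col: i for i, col in enumerate(pk_order)}
    # Permutation that sorts the query's column positions into PK order.
    perm = sorted(range(len(columns)), key=lambda i: rank[columns[i]])
    ordered_cols = [columns[i] for i in perm]
    ordered_tuples = [tuple(t[i] for i in perm) for t in tuples]
    return ordered_cols, ordered_tuples

cols, vals = normalize_in_clause(
    ["PK1", "PK5", "PK2", "PK3", "PK4"],
    [("A", "E", "N", "T", 3), ("A", "Y", "G", "T", 4)],
)
```

Each reordered tuple still targets the same row; the rewritten query simply lists the PK columns in declared order, which is the form that gets the POINT LOOKUP plan in the second EXPLAIN above.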


[jira] [Updated] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2018-08-01 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4476:
-
Attachment: PHOENIX-4476-4.x-HBase-1.3.002.patch

> Range scan used for point lookups if filter is not in order of primary keys
> ---
>
> Key: PHOENIX-4476
> URL: https://issues.apache.org/jira/browse/PHOENIX-4476
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Mujtaba Chohan
>Assignee: Xu Cang
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4476-4.x-HBase-1.3.002.patch, 
> PHOENIX-4476-workInProgress.001.patch
>
>
> {noformat}
> DROP TABLE TEST;
> CREATE TABLE IF NOT EXISTS TEST (
> PK1 CHAR(1) NOT NULL,
> PK2 VARCHAR NOT NULL,
> PK3 VARCHAR NOT NULL,
> PK4 UNSIGNED_LONG NOT NULL,
> PK5 VARCHAR NOT NULL,
> V1 VARCHAR,
> V2 VARCHAR,
> V3 UNSIGNED_LONG
> CONSTRAINT state_pk PRIMARY KEY (
>   PK1,
>   PK2,
>   PK3,
>   PK4,
>   PK5
> )
> );
> // Incorrect explain plan with un-ordered PKs
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK5, PK2, PK3, PK4) IN (('A', 'E', 
> 'N', 'T', 3), ('A', 'Y', 'G', 'T', 4)); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST ['A'] | null 
> | null   |
> | SERVER FILTER BY (PK1, PK5, PK2, PK3, PK4) IN 
> ([65,69,0,78,0,84,0,0,0,0,0,0,0,0,3],[65,89,0,71,0,84,0,0,0,0,0,0,0,0,4]) | 
> null   |
> +--+--+--+-+
> // Correct explain plan with PKs in order
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1,PK2,PK3,PK4,PK5) IN (('A', 'E', 'N',3, 
> 'T'),('A', 'Y', 'G', 4, 'T')); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK 2 ROWS 712 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 
> 2 KEYS OVER TEST | 712  | |
> +--+--+--+-+
> {noformat}





[jira] [Updated] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2018-08-01 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4476:
-
Comment: was deleted

(was: testing comment. (to test devlist notification fix requested by Thomas))

> Range scan used for point lookups if filter is not in order of primary keys
> ---
>
> Key: PHOENIX-4476
> URL: https://issues.apache.org/jira/browse/PHOENIX-4476
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Mujtaba Chohan
>Assignee: Xu Cang
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4476-workInProgress.001.patch
>
>
> {noformat}
> DROP TABLE TEST;
> CREATE TABLE IF NOT EXISTS TEST (
> PK1 CHAR(1) NOT NULL,
> PK2 VARCHAR NOT NULL,
> PK3 VARCHAR NOT NULL,
> PK4 UNSIGNED_LONG NOT NULL,
> PK5 VARCHAR NOT NULL,
> V1 VARCHAR,
> V2 VARCHAR,
> V3 UNSIGNED_LONG
> CONSTRAINT state_pk PRIMARY KEY (
>   PK1,
>   PK2,
>   PK3,
>   PK4,
>   PK5
> )
> );
> // Incorrect explain plan with un-ordered PKs
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK5, PK2, PK3, PK4) IN (('A', 'E', 
> 'N', 'T', 3), ('A', 'Y', 'G', 'T', 4)); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST ['A'] | null 
> | null   |
> | SERVER FILTER BY (PK1, PK5, PK2, PK3, PK4) IN 
> ([65,69,0,78,0,84,0,0,0,0,0,0,0,0,3],[65,89,0,71,0,84,0,0,0,0,0,0,0,0,4]) | 
> null   |
> +--+--+--+-+
> // Correct explain plan with PKs in order
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1,PK2,PK3,PK4,PK5) IN (('A', 'E', 'N',3, 
> 'T'),('A', 'Y', 'G', 4, 'T')); 
> +--+--+--+-+
> |   PLAN   |  EST_BYTES_READ  
> |  EST_ROWS_READ   | |
> +--+--+--+-+
> | CLIENT 1-CHUNK 2 ROWS 712 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 
> 2 KEYS OVER TEST | 712  | |
> +--+--+--+-+
> {noformat}





[jira] [Assigned] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-07-29 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-4647:


Assignee: Xu Cang

> Column header doesn't handle optional arguments correctly
> -
>
> Key: PHOENIX-4647
> URL: https://issues.apache.org/jira/browse/PHOENIX-4647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Shehzaad Nakhoda
>Assignee: Xu Cang
>Priority: Major
>
> SUBSTR(NAME, 1)
> being rendered as 
> SUBSTR(NAME, 1, )
> in things like column headings.
> For example:
> 0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
> VARCHAR);
> No rows affected (1.252 seconds)
> 0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
> 1 row affected (0.025 seconds)
> 0: jdbc:phoenix:> select substr(name, 1) from hello_table;
> ++
> | SUBSTR(NAME, 1, )  |
> ++
> | abc|
> ++
> Looks to me like there's a bug - 
> SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 
> 1, )
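
The expected rendering can be sketched as "include optional arguments only when they were actually supplied," so a one-argument SUBSTR never gains a trailing separator. A toy illustration of that rule (hypothetical helper, not Phoenix's actual header-building code):

```python
def render_function_header(name, required_args, optional_args):
    # Sketch of the expected column-header rendering: append optional
    # arguments only when they were actually supplied, so SUBSTR(NAME, 1)
    # never becomes "SUBSTR(NAME, 1, )". Hypothetical helper, not Phoenix.
    present = list(required_args) + [a for a in optional_args if a is not None]
    return "%s(%s)" % (name, ", ".join(str(a) for a in present))

header = render_function_header("SUBSTR", ["NAME", 1], [None])
```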





[jira] [Updated] (PHOENIX-4597) Do not initalize phoenixTransactionContext in MutationState if transactions are not enabled.

2018-07-28 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4597:
-
Attachment: PHOENIX-4597.master.001.patch

> Do not initalize phoenixTransactionContext in MutationState if transactions 
> are not enabled.
> 
>
> Key: PHOENIX-4597
> URL: https://issues.apache.org/jira/browse/PHOENIX-4597
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4597.master.001.patch
>
>






[jira] [Assigned] (PHOENIX-4597) Do not initalize phoenixTransactionContext in MutationState if transactions are not enabled.

2018-07-28 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-4597:


Assignee: Xu Cang

> Do not initalize phoenixTransactionContext in MutationState if transactions 
> are not enabled.
> 
>
> Key: PHOENIX-4597
> URL: https://issues.apache.org/jira/browse/PHOENIX-4597
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xu Cang
>Priority: Major
>






[jira] [Commented] (PHOENIX-4780) HTable.batch() doesn't handle TableNotFound correctly.

2018-06-12 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510300#comment-16510300
 ] 

Xu Cang commented on PHOENIX-4780:
--

Same thing, I think: https://issues.apache.org/jira/browse/HBASE-20621

> HTable.batch() doesn't handle TableNotFound correctly.
> --
>
> Key: PHOENIX-4780
> URL: https://issues.apache.org/jira/browse/PHOENIX-4780
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Minor
>
> batch() as well as delete() are processed using AsyncRequest. To report 
> problems we use RetriesExhaustedWithDetailsException, and there is no 
> special handling for the TableNotFound exception. So the final result of 
> running batch or delete operations against a nonexistent table looks really 
> weird and misleading:
> {noformat}
> hbase(main):003:0> delete 't1', 'r1', 'c1'
> 2018-06-12 15:02:50,742 ERROR [main] client.AsyncRequestFutureImpl: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"r1","families":{"c1":[{"qualifier":"","vlen":0,"tag":[],"timestamp":9223372036854775807}]},"ts":9223372036854775807}
> ERROR: Failed 1 action: t1: 1 time, servers with issues: null
> {noformat}
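
The "special handling" the report asks for amounts to unwrapping the aggregated per-action causes and surfacing the table-not-found case directly. A language-agnostic sketch in Python with stand-in exception classes (HBase's real types are Java; these names are illustrative only):

```python
class TableNotFoundError(Exception):
    pass

class RetriesExhaustedError(Exception):
    # Stand-in for RetriesExhaustedWithDetailsException: it aggregates the
    # per-action causes rather than raising them directly.
    def __init__(self, causes):
        super().__init__("Failed %d action(s)" % len(causes))
        self.causes = causes

def unwrap_table_not_found(exc):
    # Surface a TableNotFoundError hidden among the aggregated causes so
    # callers see a clear error instead of "servers with issues: null".
    for cause in getattr(exc, "causes", []):
        if isinstance(cause, TableNotFoundError):
            return cause
    return None

err = RetriesExhaustedError([ValueError("other"), TableNotFoundError("t1")])
tnf = unwrap_table_not_found(err)
```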





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-11 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472825#comment-16472825
 ] 

Xu Cang commented on PHOENIX-4726:
--

Uploaded a new patch; it addresses Vincent's comment.

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.4.patch, PHOENIX-4726.patch.1, 
> PHOENIX-4726.patch.2, PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-11 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Attachment: PHOENIX-4726.4.patch

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.4.patch, PHOENIX-4726.patch.1, 
> PHOENIX-4726.patch.2, PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-11 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472667#comment-16472667
 ] 

Xu Cang commented on PHOENIX-4726:
--

"Never mind - I see now you're using a dynamic column. Is that the way we want 
it to be? "


– Xu: Yes, we already have two similar timestamps as dynamic columns.

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-11 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472668#comment-16472668
 ] 

Xu Cang commented on PHOENIX-4726:
--

"[~xucang] Can you use EnvironmentEdgeManager.currentTimeMillis() instead of 
System.currentTimeMillis() ?

Thanks!"

 

--Xu: will do.

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-10 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471250#comment-16471250
 ] 

Xu Cang commented on PHOENIX-4726:
--

[~tdsilva]

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Assigned] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-09 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-4726:


Assignee: Xu Cang

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>    Assignee: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-09 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Attachment: PHOENIX-4726.patch.3

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-09 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469519#comment-16469519
 ] 

Xu Cang commented on PHOENIX-4726:
--

[~vincentpoon] thanks. I changed it in patch3.

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2, 
> PHOENIX-4726.patch.3
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4282) PhoenixMRJobSubmitter submits duplicate MR jobs for an index.

2018-05-09 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469472#comment-16469472
 ] 

Xu Cang commented on PHOENIX-4282:
--

[~tdsilva] Mind taking a look? Thanks.

> PhoenixMRJobSubmitter submits duplicate MR jobs for an index.
> -
>
> Key: PHOENIX-4282
> URL: https://issues.apache.org/jira/browse/PHOENIX-4282
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>    Assignee: Xu Cang
>Priority: Major
> Attachments: PHOENIX-4282.patch.1
>
>
> For async indexes even if there is an existing MR job that is building the 
> index, PhoenixMRJobSubmitter will submit a new job every 15 minutes until the 
> index is built. 
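
The fix amounts to a dedup check before submission: only submit a build job for an index that doesn't already have one running. A minimal sketch (the job-name pattern and helper are illustrative, not Phoenix's actual implementation):

```python
def jobs_to_submit(candidate_indexes, running_job_names):
    # Sketch of the dedup check PhoenixMRJobSubmitter needs: skip any index
    # that already has a live build job instead of resubmitting it every
    # 15-minute cycle. The job-name pattern here is illustrative only.
    running = set(running_job_names)
    return [idx for idx in candidate_indexes
            if "PhoenixIndexBuild_%s" % idx not in running]

pending = jobs_to_submit(["IDX_A", "IDX_B"], ["PhoenixIndexBuild_IDX_A"])
```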





[jira] [Assigned] (PHOENIX-4282) PhoenixMRJobSubmitter submits duplicate MR jobs for an index.

2018-05-07 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned PHOENIX-4282:


Assignee: Xu Cang

> PhoenixMRJobSubmitter submits duplicate MR jobs for an index.
> -
>
> Key: PHOENIX-4282
> URL: https://issues.apache.org/jira/browse/PHOENIX-4282
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>    Assignee: Xu Cang
>Priority: Major
>
> For async indexes even if there is an existing MR job that is building the 
> index, PhoenixMRJobSubmitter will submit a new job every 15 minutes until the 
> index is built. 





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-07 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Attachment: PHOENIX-4726.patch.2

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1, PHOENIX-4726.patch.2
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-07 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Attachment: (was: PHOENIX-4726.patch.1)

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-07 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466493#comment-16466493
 ] 

Xu Cang commented on PHOENIX-4726:
--

Uploaded a patch.

Testing steps:
 # Create a table and upsert some data.
 # Create a sync index (not the async way).
 # Check the 'SYNC_INDEX_CREATED_DATE' value.

Testing output is below:

0: jdbc:phoenix:> select TABLE_SCHEM, 
TABLE_NAME,COLUMN_NAME,SYNC_INDEX_CREATED_DATE from 
SYSTEM.CATALOG(SYNC_INDEX_CREATED_DATE DATE) where SYNC_INDEX_CREATED_DATE is 
not NULL;
+--+-+--+--+
| TABLE_SCHEM  | TABLE_NAME  | COLUMN_NAME  | SYNC_INDEX_CREATED_DATE  |
+--+-+--+--+
|  | MY_IDX2 |  | 2018-05-07 21:08:38.780  |
|  | MY_TABLE    |  | 2018-05-07 21:01:55.806  |
|  | MY_TABLE2   |  | 2018-05-07 21:02:38.250  |
| SYSTEM   | CATALOG |  | 2018-05-07 20:55:42.008  |
| SYSTEM   | FUNCTION    |  | 2018-05-07 20:55:47.547  |
| SYSTEM   | SEQUENCE    |  | 2018-05-07 20:55:44.635  |
| SYSTEM   | STATS   |  | 2018-05-07 20:55:45.939  |
+--+-+--+--+
7 rows selected (0.105 seconds)
0: jdbc:phoenix:>

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-07 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Attachment: PHOENIX-4726.patch.1

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
> Attachments: PHOENIX-4726.patch.1
>
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-07 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Description: 
save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
ASYNC_CREATED_DATE

("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)

 

Check IndexUtil.java for related code.

The reason this can be useful: we saw a case where an index state was stuck in 
'b' for quite a long time, and without a timestamp indicating when it started, 
it's hard to tell whether it is a legitimately running task or stuck...

 

 

 

 

  was:
save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP, 

Check IndexUtil.java for related code.

The reason this can be useful is: We saw a case index state stuck in 'b' for 
quite some long time. And without a timestamp to indicate where it started, 
it's hard to tell if this is a legit running task or stuck...

Summary: save index build timestamp -- for SYNC case only.  (was: save 
index build timestamp)

> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Priority: Minor
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP,  or 
> ASYNC_CREATED_DATE
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...
>  
>  
>  
>  





[jira] [Commented] (PHOENIX-4726) save index build timestamp

2018-05-07 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466375#comment-16466375
 ] 

Xu Cang commented on PHOENIX-4726:
--

In MetaDataClient.java's createTableInternal method (which is called by 
createIndex), we don't set any timestamp when "asyncCreatedDate" is null.

I will explore a way to log this too, maybe as a new timestamp called 
"SYNC_INDEX_CREATED_DATE".

> save index build timestamp
> --
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Priority: Minor
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP, 
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...





[jira] [Commented] (PHOENIX-4726) save index build timestamp

2018-05-07 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466317#comment-16466317
 ] 

Xu Cang commented on PHOENIX-4726:
--

select  ASYNC_CREATED_DATE  from SYSTEM.CATALOG(ASYNC_CREATED_DATE DATE) where 
ASYNC_CREATED_DATE is not NULL order by ASYNC_CREATED_DATE desc limit 5;

 

You are right, Vincent. Using the above query, I can verify this date is set 
to the async index build's start timestamp.

> save index build timestamp
> --
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP, 
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...





[jira] [Commented] (PHOENIX-4726) save index build timestamp

2018-05-07 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466179#comment-16466179
 ] 

Xu Cang commented on PHOENIX-4726:
--

[~vincentpoon]  would like to hear your opinion on this too. Thanks. 

> save index build timestamp
> --
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
>
> save index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP, 
> Check IndexUtil.java for related code.
> The reason this can be useful is: We saw a case index state stuck in 'b' for 
> quite some long time. And without a timestamp to indicate where it started, 
> it's hard to tell if this is a legit running task or stuck...





[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-07 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466121#comment-16466121
 ] 

Xu Cang commented on PHOENIX-4724:
--

[~aertoria]

I am not speaking for Vincent, but my understanding is that this method will be 
used when a user wants to build an index. This is a one-time effort based on 
the current table state (or you can call it a snapshot), so there is no use 
case requiring removeValue() in this index-building scenario.

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4724.v1.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.
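As a rough illustration of the idea described above (this is not the Phoenix implementation; the class and method names here are made up), an exact equi-depth boundary computation looks like the sketch below. The streaming histogram in the patch approximates these same boundaries with bounded memory instead of keeping every value:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Exact equi-depth boundary computation: a naive baseline for what a
 * streaming equi-depth histogram approximates with bounded memory.
 * All names here are illustrative, not taken from the Phoenix patch.
 */
public class EquiDepthBaseline {
    private final List<Long> values = new ArrayList<>();

    /** Accept one value from the stream. */
    public void addValue(long v) {
        values.add(v);
    }

    /** Return b-1 boundaries so that each of the b buckets holds
     *  roughly the same number of items. */
    public List<Long> getBoundaries(int b) {
        List<Long> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        List<Long> bounds = new ArrayList<>();
        for (int i = 1; i < b; i++) {
            // the value at rank i*n/b separates bucket i-1 from bucket i
            bounds.add(sorted.get(i * sorted.size() / b));
        }
        return bounds;
    }
}
```

For the pre-split use case mentioned above, one would feed the indexed column's values through addValue() and then use the returned boundaries as region split points, so each region receives a comparable share of the index rows.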





[jira] [Updated] (PHOENIX-4726) save index build timestamp

2018-05-04 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4726:
-
Priority: Minor  (was: Major)

> save index build timestamp
> --
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Xu Cang
>Priority: Minor
>
> Save an index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP. 
> Check IndexUtil.java for related code.
> The reason this can be useful: we saw a case where an index state was stuck 
> in 'b' for quite a long time, and without a timestamp to indicate when the 
> build started, it's hard to tell whether it is a legitimate running task or 
> a stuck one.





[jira] [Created] (PHOENIX-4726) save index build timestamp

2018-05-04 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-4726:


 Summary: save index build timestamp
 Key: PHOENIX-4726
 URL: https://issues.apache.org/jira/browse/PHOENIX-4726
 Project: Phoenix
  Issue Type: Improvement
Reporter: Xu Cang


Save an index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP.

Check IndexUtil.java for related code.

The reason this can be useful: we saw a case where an index state was stuck in 
'b' for quite a long time, and without a timestamp to indicate when the build 
started, it's hard to tell whether it is a legitimate running task or a stuck 
one.





Re: phoenix newbie build question

2018-02-13 Thread Xu Cang
Hi Josh,

Thanks for your reply. I have Java 1.8 and Maven 3.3.9, as shown below.

Apache Maven 3.3.9
Maven home: /usr/share/maven
Java version: 1.8.0_151, vendor: Oracle Corporation

Ok. Sounds good. Thank you.

Xu

On Tue, Feb 13, 2018 at 4:55 PM, Josh Elser <els...@apache.org> wrote:

> Hi Xu,
>
> What version of Java and Maven are you using?
>
> I wouldn't be super worried about the test failures -- it's likely just an
> indication that the unit test is reliant on something in the local
> environment which isn't there on your computer (e.g. a default krb5.conf).
> Ideally, we can figure out why it failed and fix it for the future, but
> would need to get to the bottom of it..
>
>
> On 2/13/18 6:51 PM, Xu Cang wrote:
>
>> Hi,
>>
>> I am trying to build Phoenix (on Ubuntu) and run tests by following
>> 'build.txt' instruction from code repo.
>>
>> Commands I ran:
>>
>> 1. mvn install -DskipTests
>> 2. mvn process-sources
>> 3. mvn package
>>
>> Then I got this error:
>>
>> [ERROR]
>> testMultipleConnectionsAsSameUserWithoutLogin(org.apache.pho
>> enix.jdbc.SecureUserConnectionsTest)
>> Time elapsed: 0.013 s  <<< ERROR!
>> java.lang.RuntimeException: Couldn't get the current user!!
>> at
>> org.apache.phoenix.jdbc.SecureUserConnectionsTest.testMultip
>> leConnectionsAsSameUserWithoutLogin(SecureUserConnectionsTest.java:378)
>>
>> [INFO]
>> [INFO] Results:
>> [INFO]
>> [ERROR] Errors:
>> [ERROR]
>>   SecureUserConnectionsTest.testMultipleConnectionsAsSameUserW
>> ithoutLogin:378
>> Runtime
>> [INFO]
>> [ERROR] Tests run: 1592, Failures: 0, Errors: 1, Skipped: 3
>> [INFO]
>> [INFO]
>> 
>> [INFO] Reactor Summary:
>> [INFO]
>> [INFO] Apache Phoenix . SUCCESS [
>> 0.924 s]
>> [INFO] Phoenix Core ... FAILURE [
>> 35.155 s]
>>
>>
>> The error comes from this code piece:
>>
>> try {
>>     this.user = User.getCurrent();
>> } catch (IOException e) {
>>     throw new RuntimeException("Couldn't get the current user!!");
>> }
>>
>>
>> My question is, am I missing any dependencies in order to get this user?
>> Any pointer or help is appreciated.  Thanks,
>>
>>
>> (BTW, the IndexUtilTest.java unit test ran successfully.)
>>
>> Best Regards,
>> Xu
>>
>>


phoenix newbie build question

2018-02-13 Thread Xu Cang
Hi,

I am trying to build Phoenix (on Ubuntu) and run tests by following
'build.txt' instruction from code repo.

Commands I ran:

1. mvn install -DskipTests
2. mvn process-sources
3. mvn package

Then I got this error:

[ERROR]
testMultipleConnectionsAsSameUserWithoutLogin(org.apache.phoenix.jdbc.SecureUserConnectionsTest)
Time elapsed: 0.013 s  <<< ERROR!
java.lang.RuntimeException: Couldn't get the current user!!
at
org.apache.phoenix.jdbc.SecureUserConnectionsTest.testMultipleConnectionsAsSameUserWithoutLogin(SecureUserConnectionsTest.java:378)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]
 SecureUserConnectionsTest.testMultipleConnectionsAsSameUserWithoutLogin:378
Runtime
[INFO]
[ERROR] Tests run: 1592, Failures: 0, Errors: 1, Skipped: 3
[INFO]
[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix . SUCCESS [
0.924 s]
[INFO] Phoenix Core ... FAILURE [
35.155 s]


The error comes from this code piece:

try {
    this.user = User.getCurrent();
} catch (IOException e) {
    throw new RuntimeException("Couldn't get the current user!!");
}


My question is, am I missing any dependencies in order to get this user?
Any pointer or help is appreciated.  Thanks,


(BTW, the IndexUtilTest.java unit test ran successfully.)

Best Regards,
Xu