[ 
https://issues.apache.org/jira/browse/PHOENIX-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7282:
---------------------------------
    Summary: Incorrect data in index column for corresponding BIGINT type 
column in data table  (was: Incorrect data in index column for corresponding 
BIGIT type column in data table)

> Incorrect data in index column for corresponding BIGINT type column in data 
> table
> ---------------------------------------------------------------------------------
>
>                 Key: PHOENIX-7282
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7282
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.3.0
>            Reporter: Sanjeet Malhotra
>            Assignee: Szucs Villo
>            Priority: Major
>
> If we add a new column of type BIGINT to an existing data table and use the 
> CASCADE INDEX option, the corresponding column in the index is added as 
> DECIMAL type. If we then query that column and the query plan resolves to the 
> index table instead of the data table, the value returned differs from the 
> value that would have been returned had the data table been used.
> IT to reproduce:
> {code:java}
> @Test
> public void testBigIntData() throws Exception {
>     String dataTableName = generateUniqueName();
>     String indexName = generateUniqueName();
>     try (Connection conn = DriverManager.getConnection(getUrl())) {
>         conn.createStatement().execute("create table " + dataTableName
>                 + " (id varchar not null primary key, col1 integer)");
>         conn.createStatement().execute(
>                 "create index " + indexName + " on " + dataTableName + " (col1)");
>         conn.createStatement().execute("alter table " + dataTableName
>                 + " add if not exists col3 bigint cascade index all");
>         conn.createStatement().execute(
>                 "upsert into " + dataTableName + " (id, col3) values ('a', 3)");
>         conn.commit();
>         ResultSet rs = conn.createStatement().executeQuery(
>                 "select col3 from " + dataTableName);
>         while (rs.next()) {
>             System.out.println(rs.getObject(1));
>         }
>     }
> }
> {code}
> So far this issue has been observed when a new column of type BIGINT is added 
> to a data table/view. If the data table/view already has a column of type 
> BIGINT, the error above is not observed even when the query uses the index 
> table as per the query plan.
>  
> Further findings so far:
>  # During alter table, when we add a new column of type BIGINT to a data 
> table/view, we also add a column of type DECIMAL (and not BIGINT) to the 
> corresponding index/view index.
>  # The above finding holds for alter table only, not for create table.
>  # We write the value into the data table/view column of BIGINT type, and we 
> write the *same byte array* into the index/view index, but into a column of 
> DECIMAL type. The byte array written to the data table at the HBase layer was 
> serialized via {{PLong}}, but because the same byte array is written to the 
> index column, at read time (from the index table as per the query plan) it 
> gets deserialized by {{PDecimal}}. Since the serialization and 
> deserialization logic used for the index column are incompatible, a value in 
> the data table becomes an entirely different value when read from the index 
> table. The serialization logic of PLong and PDecimal is completely different, 
> and so is their deserialization logic.
>  ## One such example we saw: in the IT above (reproducing the error) we 
> insert 3 into the data table column (of type BIGINT, newly added via alter), 
> but the corresponding deserialized value in the index is 
> `-1.010101010098E+126`.
> We still need to figure out why this issue only happens for new columns added 
> via the alter DDL, and not via the create DDL at table/view creation time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
