[ https://issues.apache.org/jira/browse/IGNITE-16200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maria Makedonskaya reassigned IGNITE-16200:
-------------------------------------------

    Assignee: Maria Makedonskaya

> Extending column size (varchar) via alter table does not persist after node / 
> cluster restart
> ---------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-16200
>                 URL: https://issues.apache.org/jira/browse/IGNITE-16200
>             Project: Ignite
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 2.7.6, 2.10, 2.9.1, 2.11.1
>         Environment: Centos 7.9
> Ignite versions 2.7, 2.9, 2.10, 2.11
>            Reporter: Vanja Pandzic
>            Assignee: Maria Makedonskaya
>            Priority: Minor
>
> When extending a varchar column's size via ALTER TABLE DROP COLUMN / ALTER TABLE 
> ADD COLUMN, the change to the column size does not persist after a node / cluster 
> restart, while the data added in the meantime does (in a supposedly undersized 
> column).
> Dropping the table and then recreating it via new DDL with the extended varchar 
> column works and can be used as a workaround.
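>  
> For reference, a minimal sketch of that workaround, using the DEMO_TABLE schema 
> from the reproduction below (note that dropping the table discards its data, 
> which then has to be reloaded):
> {code:java}
> -- Workaround sketch: recreate the table with the wider column.
> -- WARNING: DROP TABLE removes the existing data; reload it afterwards.
> DROP TABLE IF EXISTS PUBLIC.DEMO_TABLE;
> CREATE TABLE DEMO_TABLE
> (
> id                      INT,
> contentious_column      VARCHAR(30),
> PRIMARY KEY (id)
> ) WITH "CACHE_NAME=DEMO_TABLE, DATA_REGION=PersistDataRegion, TEMPLATE=REPLICATED, BACKUPS=1"; {code}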
>  
> We did find the notice regarding a known limitation of ALTER TABLE, specifically 
> this:
> {code:java}
> The command does not remove actual data from the cluster which means that if 
> the column 'name' is dropped, the value of the 'name' is still stored in the 
> cluster. This limitation is to be addressed in the next releases. {code}
> but there is *no warning or failure* of any kind that one would expect from a 
> collision with already stored data - unlike changing the column type, which 
> produces "SQL Error [3009] [42000]: Column already exists: with a different 
> type".
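>  
> For contrast, a sketch of a type change on the same column, which per the 
> behavior described above fails loudly (error text as quoted above, not 
> re-verified here):
> {code:java}
> -- Re-adding the dropped column with a different type (INT instead of VARCHAR)
> -- is rejected, unlike the silent size change:
> ALTER TABLE PUBLIC.DEMO_TABLE
>     DROP COLUMN contentious_column;
> ALTER TABLE PUBLIC.DEMO_TABLE
>     ADD COLUMN (contentious_column INT);
> -- SQL Error [3009] [42000]: Column already exists: with a different type {code}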
>  
> Is this caused by the same issue / limitation?
>  
> Either way, it can still cause a nasty surprise when the cluster is restarted, 
> because there is no warning or error of any kind.
>  
> If it can't be fixed yet, can at least some kind of warning / error be added 
> when attempting to change the column size?
>  
> How to reproduce:
> {code:java}
> CREATE TABLE DEMO_TABLE
> (
> id                      INT,     
> contentious_column      VARCHAR(8), 
> PRIMARY KEY (id)
> ) WITH "CACHE_NAME=DEMO_TABLE, DATA_REGION=PersistDataRegion, 
> TEMPLATE=REPLICATED, BACKUPS=1";
> -- both inserts go through: the values fit within the 8-character limit
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(1, '1234');
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(2, '12345678');
> -- fails, contentious_column value is too long
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(3, '123456789101112131415');
> SELECT contentious_column FROM PUBLIC.DEMO_TABLE;
> -- change column size from 8 to 30
> ALTER TABLE PUBLIC.DEMO_TABLE
>     DROP COLUMN contentious_column;
> ALTER TABLE PUBLIC.DEMO_TABLE
>     ADD COLUMN (contentious_column varchar(30));
> -- now contentious_column is enlarged and can accept the previously unacceptable value
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(3, '123456789101112131415');
> -- restart node / cluster via: systemctl restart [email protected]
> -- after restart, value can still be accessed
> SELECT contentious_column FROM PUBLIC.DEMO_TABLE WHERE id=3;
> -- but you can't insert new values longer than 8 characters: the column size
> -- has shrunk back to its original value
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(4, '123456789');
> -- a value of length 8 still goes through
> INSERT INTO DEMO_TABLE (id, contentious_column) VALUES(4, '12345678'); {code}
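>  
> As a side note, the shrink can presumably also be observed without inserting 
> data, by comparing the column metadata before and after the restart. This 
> assumes the SYS.TABLE_COLUMNS system view of recent 2.x releases; the view and 
> column names below are an assumption, not something verified for this report:
> {code:java}
> -- Hypothetical metadata check (assumes the SYS.TABLE_COLUMNS system view):
> -- run before and after the restart and compare the reported column size.
> SELECT * FROM SYS.TABLE_COLUMNS
> WHERE SCHEMA_NAME = 'PUBLIC'
>   AND TABLE_NAME = 'DEMO_TABLE'
>   AND COLUMN_NAME = 'CONTENTIOUS_COLUMN'; {code}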



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
