[ 
https://issues.apache.org/jira/browse/IGNITE-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-17041:
------------------------------------
    Description: 
Let's consider the following situation:

The first server node is started with the following cache configuration.

{code:java}
QueryEntity qryEntity = new QueryEntity(String.class, Person.class)
    .setTableName("PERSON")
    .addQueryField("id", Integer.class.getName(), null);

CacheConfiguration<?, ?> ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME)
    .setQueryEntities(Collections.singletonList(qryEntity));
{code}

Then we create an additional column manually via DDL:

{code:java}
srv.cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("ALTER TABLE PERSON ADD data INTEGER")).getAll();
{code}

As a result, we get two query fields with the following aliases:

"id" -> "ID" (note that the isSqlEscapeAll flag is disabled in the cache configuration, so Ignite automatically creates upper-case aliases for preconfigured entities)
"DATA" -> "DATA"
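To make the aliasing rule concrete, here is a minimal plain-Java sketch of it. The {{columnAlias}} helper is hypothetical and only mirrors the behaviour described here; it is not Ignite's internal API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AliasSketch {
    /**
     * Derives the column alias for a query field. Assumption based on the
     * behaviour described above: with sqlEscapeAll disabled, preconfigured
     * field names are upper-cased; with it enabled, names are kept as-is.
     */
    public static String columnAlias(String fieldName, boolean sqlEscapeAll) {
        return sqlEscapeAll ? fieldName : fieldName.toUpperCase();
    }

    public static void main(String[] args) {
        Map<String, String> aliases = new LinkedHashMap<>();

        // Preconfigured via QueryEntity: "id" gets an upper-case alias.
        aliases.put("id", columnAlias("id", false));

        // Created via ALTER TABLE: the field name arrives already as "DATA".
        aliases.put("DATA", columnAlias("DATA", false));

        System.out.println(aliases); // {id=ID, DATA=DATA}
    }
}
```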

Then let's start a second server node with the following cache configuration:

{code:java}
QueryEntity qryEntity = new QueryEntity(String.class, Person.class)
    .setTableName("PERSON")
    .addQueryField("id", Integer.class.getName(), null)
    .addQueryField("data", Boolean.class.getName(), null);

CacheConfiguration<?, ?> ccfg = new CacheConfiguration<>(DEFAULT_CACHE_NAME)
    .setQueryEntities(Collections.singletonList(qryEntity));
{code}

Note that the type of the "data" query field is Boolean now.

The configuration above will be stored as two query fields:
"id" -> "ID"
"data" -> "DATA"
Again, since the isSqlEscapeAll flag is disabled in the cache configuration, Ignite automatically creates upper-case aliases for preconfigured entities and uses them as column names.

During the join of the second node, the first node validates that the cache query fields configuration received from the joining node is identical to the local one. Fields that are not present in the first node's configuration are merged in, so the configurations complement each other. If there are conflicts (e.g. the same field has different types on the joining and the "in cluster" nodes), the joining node is rejected.
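The validation gap can be sketched in plain Java, with field names mapped to type names. This is only an illustration of the merge logic as described above, not Ignite's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeSketch {
    /**
     * Merges the joining node's query fields (name -> type) into the cluster's
     * fields, mirroring the validation described above: unique names are added,
     * same-name fields with different types are reported as a conflict.
     * Aliases are NOT consulted, which is the gap being reported.
     */
    public static String merge(Map<String, String> cluster, Map<String, String> joining) {
        for (Map.Entry<String, String> e : joining.entrySet()) {
            String existingType = cluster.get(e.getKey()); // raw field-name lookup

            if (existingType == null)
                cluster.put(e.getKey(), e.getValue());     // unique field: complement the config
            else if (!existingType.equals(e.getValue()))
                return "Conflicting type for field: " + e.getKey();
        }

        return null; // no conflict found
    }

    public static void main(String[] args) {
        Map<String, String> cluster = new LinkedHashMap<>();
        cluster.put("id", "java.lang.Integer");
        cluster.put("DATA", "java.lang.Integer"); // added via ALTER TABLE

        Map<String, String> joining = new LinkedHashMap<>();
        joining.put("id", "java.lang.Integer");
        joining.put("data", "java.lang.Boolean"); // same column DATA, different type

        // "data" and "DATA" are distinct raw names, so no conflict is
        // detected and both survive the merge.
        System.out.println(merge(cluster, joining)); // null
        System.out.println(cluster.keySet());        // [id, DATA, data]
    }
}
```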

In the example described above, the second node joins the cluster with no conflicts.
The merge process completes successfully and we get two data columns: "data" and "DATA".
Since the isSqlEscapeAll flag is disabled for both configurations, this is confusing behaviour.
The expected behaviour is that the second node would be rejected.
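One way to get the expected behaviour is to compare normalized column aliases instead of raw field names. A hedged sketch follows; all names are illustrative, not Ignite's actual validation code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AliasAwareCheckSketch {
    /**
     * Checks the joining node's fields (name -> type) against the cluster's
     * fields by their normalized column alias (upper-cased here, matching the
     * sqlEscapeAll-disabled behaviour described above).
     */
    public static String findConflict(Map<String, String> cluster, Map<String, String> joining) {
        // Index the cluster's fields by normalized column alias.
        Map<String, String> byAlias = new LinkedHashMap<>();

        for (Map.Entry<String, String> e : cluster.entrySet())
            byAlias.put(e.getKey().toUpperCase(), e.getValue());

        for (Map.Entry<String, String> e : joining.entrySet()) {
            String existingType = byAlias.get(e.getKey().toUpperCase());

            if (existingType != null && !existingType.equals(e.getValue()))
                return "Conflicting types for column " + e.getKey().toUpperCase()
                    + ": " + existingType + " vs " + e.getValue();
        }

        return null;
    }

    public static void main(String[] args) {
        Map<String, String> cluster = new LinkedHashMap<>();
        cluster.put("DATA", "java.lang.Integer"); // added via ALTER TABLE

        Map<String, String> joining = new LinkedHashMap<>();
        joining.put("data", "java.lang.Boolean"); // preconfigured on the joining node

        // Both names normalize to column DATA with different types,
        // so a conflict is reported and the join should be rejected.
        System.out.println(findConflict(cluster, joining));
    }
}
```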



  was:
It is needed to normalize the query entity after it is modified during the MERGE process, as is done during the first cache configuration processing. Currently, new table columns created from Query Entity fields added during the MERGE process are named differently from columns created from the initial Query Entity fields.

For example, if the CacheConfiguration#isSqlEscapeAll flag is disabled, all QueryEntity fields are converted to upper case and used as such to name columns. But this does not happen if a Query Entity field was added during the MERGE process. It confuses users and leads to situations where column conflicts cannot be detected because the column names differ.

Reproducer:


{code:java}
public class TestClass extends GridCommonAbstractTest {
    /** Count of cluster nodes. */
    public static final int NODES_CNT = 2;

    /** Count of backup partitions. */
    public static final int BACKUPS = 2;

    /** {@inheritDoc} */
    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
        QueryEntity queryEntity = new QueryEntity(String.class, Person.class)
            .setTableName("PERSON")
            .addQueryField("id", Integer.class.getName(), null)
            .addQueryField("name", String.class.getName(), null);

        CacheConfiguration<?, ?> configuration = new CacheConfiguration<>(GridAbstractTest.DEFAULT_CACHE_NAME)
            .setBackups(BACKUPS)
            .setQueryEntities(Collections.singletonList(queryEntity));

        // The second node declares an extra "age" field with a conflicting Boolean type.
        if (igniteInstanceName.endsWith("1"))
            queryEntity.addQueryField("age", Boolean.class.getName(), null);

        return super.getConfiguration(igniteInstanceName)
            .setConsistentId(igniteInstanceName)
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(new DataRegionConfiguration()))
            .setCacheConfiguration(configuration);
    }

    /** {@inheritDoc} */
    @Override protected void afterTest() throws Exception {
        stopAllGrids();
    }

    /** */
    @Test
    public void testIssue() throws Exception {
        startGrid(0);

        grid(0).cache(GridAbstractTest.DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("ALTER TABLE PERSON ADD age INTEGER")).getAll();

        GridTestUtils.assertThrows(log, () -> startGrid(1), Exception.class, "");

        grid(0).cluster().state(ClusterState.INACTIVE);

        startGrid(1);

        grid(0).cluster().state(ClusterState.ACTIVE);

        System.out.println(grid(0).cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("select * from \"SYS\".TABLE_COLUMNS"))
            .getAll());

        grid(0).cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("INSERT INTO PERSON(_key, id, name, AGE) VALUES(0, 2, '123', 2)"))
            .getAll();
    }

    /** Test value class. */
    class Person {
        private int id;

        private String name;

        private int age;
    }
}
{code}

As a result, we can see that the "age" column is duplicated in upper and lower case, and no conflicts were detected.

It also causes problems if an insert and a select are executed after that:


{code:java}
grid(0).cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("INSERT INTO PERSON(_key, id, name, age) VALUES(0, 2, '123', 2)"))
    .getAll();

grid(0).cache(DEFAULT_CACHE_NAME).query(new SqlFieldsQuery("SELECT * FROM PERSON"))
    .getAll();
{code}

The following exception is raised during the SELECT query execution:


{code:java}
javax.cache.CacheException: Failed to execute map query on remote node [nodeId=8a215fea-a005-47d2-aaee-aa1736900000, errMsg=Failed to wrap object into H2 Value. java.lang.Integer cannot be cast to java.lang.Boolean]
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:239)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:218)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.sendError(GridMapQueryExecutor.java:778)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:585)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:284)
        at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2237)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$1.applyx(GridReduceQueryExecutor.java:157)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$1.applyx(GridReduceQueryExecutor.java:152)
        at org.apache.ignite.internal.util.lang.IgniteInClosure2X.apply(IgniteInClosure2X.java:38)
        at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.send(IgniteH2Indexing.java:2362)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.send(GridReduceQueryExecutor.java:1201)
        at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:463)
        at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$7.iterator(IgniteH2Indexing.java:1848)
        at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iter(QueryCursorImpl.java:102)
        at org.apache.ignite.internal.processors.cache.query.RegisteredQueryCursor.iter(RegisteredQueryCursor.java:91)
        at org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:124)
        at org.apache.ignite.issues.TestClass.testIssue(TestClass.java:117)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2434)
        at java.lang.Thread.run(Thread.java:748)
{code}

 
That said, it is not a major problem in practice, since the extra column can be deleted explicitly, after which everything works fine.

    Environment: 




> Query fields merge process does not consider aliases to determine conflicts.
> ----------------------------------------------------------------------------
>
>                 Key: IGNITE-17041
>                 URL: https://issues.apache.org/jira/browse/IGNITE-17041
>             Project: Ignite
>          Issue Type: Bug
>         Environment: 
>            Reporter: Mikhail Petrov
>            Priority: Major
>              Labels: ise
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
