[jira] [Assigned] (IGNITE-13142) SQL constraint not null added on key prevents correct inserts

2020-06-26 Thread Sergey Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-13142:
---

Assignee: Sergey Kalashnikov

> SQL constraint not null added on key prevents correct inserts
> -
>
> Key: IGNITE-13142
> URL: https://issues.apache.org/jira/browse/IGNITE-13142
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Major
> Attachments: SqlNotNullKeyFielfTest.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It is possible to configure {{QueryEntity}} so that subsequent inserts would 
> fail.
> - lowercase {{keyFieldName}}
> - not null constraint on {{keyFieldName}}
> {code:java}
> new QueryEntity()
>     .setTableName("Person")
>     .setKeyFieldName("id")
>     .setKeyType("java.lang.Integer")
>     .setValueType("Person")
>     .setFields(new LinkedHashMap<>(
>         F.asMap("id", "java.lang.Integer",
>             "name", "java.lang.String",
>             "age", "java.lang.Integer")))
>     .setNotNullFields(F.asSet("id", "name", "age"));
> {code}
> The following SQL produces the error: Null value is not allowed for column 'ID'
> {code}
> insert into Person (id, name, age) values (1, 'John Doe', 30)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13142) SQL constraint not null added on key prevents correct inserts

2020-06-10 Thread Sergey Kalashnikov (Jira)
Sergey Kalashnikov created IGNITE-13142:
---

 Summary: SQL constraint not null added on key prevents correct 
inserts
 Key: IGNITE-13142
 URL: https://issues.apache.org/jira/browse/IGNITE-13142
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kalashnikov
 Attachments: SqlNotNullKeyFielfTest.java

It is possible to configure {{QueryEntity}} so that subsequent inserts would 
fail.
- lowercase {{keyFieldName}}
- not null constraint on {{keyFieldName}}


{code:java}
new QueryEntity()
    .setTableName("Person")
    .setKeyFieldName("id")
    .setKeyType("java.lang.Integer")
    .setValueType("Person")
    .setFields(new LinkedHashMap<>(
        F.asMap("id", "java.lang.Integer",
            "name", "java.lang.String",
            "age", "java.lang.Integer")))
    .setNotNullFields(F.asSet("id", "name", "age"));
{code}

The following SQL produces the error: Null value is not allowed for column 'ID'

{code}
insert into Person (id, name, age) values (1, 'John Doe', 30)
{code}
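The insert fails even though the key column value is supplied. A plausible (unconfirmed) explanation is an identifier-case mismatch: unquoted SQL column names are normalized to upper case ('ID'), while the configured {{keyFieldName}} stays lower-case, so a name-based lookup during NOT NULL validation misses the value. The sketch below is illustrative plain Java, not Ignite code; the helper name is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CaseMismatchSketch {
    /** Hypothetical case-insensitive lookup a validator would need. */
    static Object normalizedGet(Map<String, Object> row, String field) {
        Object v = row.get(field);
        return v != null ? v : row.get(field.toUpperCase());
    }

    public static void main(String[] args) {
        // Values keyed by the upper-cased names unquoted SQL identifiers get.
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("ID", 1);
        row.put("NAME", "John Doe");
        row.put("AGE", 30);

        // Exact lookup with the configured lower-case key field name misses,
        // which would make the NOT NULL check see a "null" key value.
        System.out.println(row.get("id"));            // null
        System.out.println(normalizedGet(row, "id")); // 1
    }
}
```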




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13142) SQL constraint not null added on key prevents correct inserts

2020-06-10 Thread Sergey Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-13142:

Attachment: SqlNotNullKeyFielfTest.java

> SQL constraint not null added on key prevents correct inserts
> -
>
> Key: IGNITE-13142
> URL: https://issues.apache.org/jira/browse/IGNITE-13142
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kalashnikov
>Priority: Major
> Attachments: SqlNotNullKeyFielfTest.java
>
>
> It is possible to configure {{QueryEntity}} so that subsequent inserts would 
> fail.
> - lowercase {{keyFieldName}}
> - not null constraint on {{keyFieldName}}
> {code:java}
> new QueryEntity()
>     .setTableName("Person")
>     .setKeyFieldName("id")
>     .setKeyType("java.lang.Integer")
>     .setValueType("Person")
>     .setFields(new LinkedHashMap<>(
>         F.asMap("id", "java.lang.Integer",
>             "name", "java.lang.String",
>             "age", "java.lang.Integer")))
>     .setNotNullFields(F.asSet("id", "name", "age"));
> {code}
> The following SQL produces the error: Null value is not allowed for column 'ID'
> {code}
> insert into Person (id, name, age) values (1, 'John Doe', 30)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13110) An option to validate field types against SQL schema on key-value insert

2020-06-03 Thread Sergey Kalashnikov (Jira)
Sergey Kalashnikov created IGNITE-13110:
---

 Summary: An option to validate field types against SQL schema on 
key-value insert
 Key: IGNITE-13110
 URL: https://issues.apache.org/jira/browse/IGNITE-13110
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov


Let's add a configurable option to prevent insertion of key-value pairs that 
aren't compatible with the SQL schema.

The option can be added at the {{SqlConfiguration}} level or even per 
{{QueryEntity}}.

The checks can be performed within the existing 
{{GridQueryTypeDescriptor#validateKeyAndValue}} facility, which seems well 
suited for this task.

This addition will prevent problems where values successfully added to the 
cache later produce errors when queried via SQL.

See discussion: 
http://apache-ignite-developers.2346864.n4.nabble.com/Prevent-insertion-of-cache-entry-if-the-binary-field-type-and-the-type-of-the-query-entity-do-not-ma-td47678.html
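As a sketch of the kind of check {{validateKeyAndValue}} could host, the following plain-Java fragment compares a field's runtime type with its declared schema type and rejects the entry on mismatch. The schema map, helper name, and exception type are stand-ins for Ignite internals, not actual APIs.

```java
import java.util.Map;

public class TypeValidationSketch {
    /** Hypothetical check: reject a value whose runtime type does not match the schema. */
    static void validateField(Map<String, Class<?>> schema, String field, Object val) {
        Class<?> declared = schema.get(field);

        if (declared != null && val != null && !declared.isInstance(val))
            throw new IllegalArgumentException("Type mismatch for field '" + field
                + "': expected " + declared.getName() + ", got " + val.getClass().getName());
    }

    public static void main(String[] args) {
        Map<String, Class<?>> schema = Map.of("id", Integer.class, "name", String.class);

        validateField(schema, "id", 42);          // compatible: passes silently
        try {
            validateField(schema, "id", "oops");  // a String into an INT column: rejected
        }
        catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

An entry rejected here at put-time would otherwise be stored and only fail later, at SQL query time.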







--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12933) Node failed after put incorrect key class for indexed type to transactional cache

2020-04-27 Thread Sergey Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17093175#comment-17093175
 ] 

Sergey Kalashnikov commented on IGNITE-12933:
-

[~alex_pl], I've reviewed your changes, LGTM.

> Node failed after put incorrect key class for indexed type to transactional 
> cache
> -
>
> Key: IGNITE-12933
> URL: https://issues.apache.org/jira/browse/IGNITE-12933
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A node fails after putting an incorrect key class for an indexed type into a 
> transactional cache when indexing is enabled.
> Reproducer:
> {code:java}
> public class IndexedTypesTest extends GridCommonAbstractTest {
>     private boolean failed;
>
>     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
>         return super.getConfiguration(igniteInstanceName)
>             .setFailureHandler((ignite, ctx) -> failed = true)
>             .setCacheConfiguration(new CacheConfiguration<>(DEFAULT_CACHE_NAME)
>                 .setAtomicityMode(TRANSACTIONAL)
>                 .setIndexedTypes(String.class, String.class));
>     }
>
>     @Test
>     public void testPutIndexedType() throws Exception {
>         Ignite ignite = startGrids(2);
>
>         for (int i = 0; i < 10; i++) {
>             try {
>                 ignite.cache(DEFAULT_CACHE_NAME).put(i, "val" + i);
>             }
>             catch (Exception ignore) {
>                 // No-op: a put with an incorrect key class is expected to fail.
>             }
>         }
>
>         assertFalse(failed);
>     }
> }
> {code}
> Node failed with exception:
> {noformat}
> [2020-04-22 
> 17:05:34,524][ERROR][sys-stripe-11-#76%cache.IndexedTypesTest1%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler 
> [hnd=o.a.i.i.processors.cache.IndexedTypesTest$$Lambda$115/0x00080024d040@147237db,
>  failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a 
> transaction has produced runtime exception]]
> class 
> org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: 
> Committing a transaction has produced runtime exception
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:800)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:838)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitRemoteTx(GridDistributedTxRemoteAdapter.java:893)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:1502)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processDhtTxPrepareRequest(IgniteTxHandler.java:1233)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$5.apply(IgniteTxHandler.java:229)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$5.apply(IgniteTxHandler.java:227)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to update 
> index, incorrect key class [expCls=java.lang.String, 
> actualCls=java.lang.Integer]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.typeByValue(GridQueryProcessor.java:2223)
> at 
> 

[jira] [Assigned] (IGNITE-11923) [IEP-35] Migrate IgniteMXBean

2019-11-25 Thread Sergey Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-11923:
---

Assignee: Sergey Kalashnikov

> [IEP-35] Migrate IgniteMXBean
> -
>
> Key: IGNITE-11923
> URL: https://issues.apache.org/jira/browse/IGNITE-11923
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: IEP-35, await
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After merging of IGNITE-11848 we should migrate `IgniteMXBean` to the new 
> metric framework.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-11075) Index rebuild procedure over cache partition file

2019-11-25 Thread Sergey Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16981517#comment-16981517
 ] 

Sergey Kalashnikov edited comment on IGNITE-11075 at 11/25/19 12:41 PM:


Implemented the following solution (PR: https://github.com/apache/ignite/pull/7070):

Goals:
 - Restart failed attempts to rebuild the indexes (due to node crash).
 - Minimize the scope of recovery rebuilds to those caches and partitions that 
have not been able to complete the rebuild before the crash.
 - Provide ability (and API) to rebuild arbitrarily selected partitions of 
cache indexes.

Design:

1) A set of partition rebuild markers is kept inside {{index.bin}} file (i.e. 
persisted).
 For that purpose, the new {{"IndexRebuildMarkers"}} tree is introduced.

Item size for shared cache group: 6 bytes (4 for cacheId and 2 for partition)
 Item size for single cache: 2 bytes (just partition)

So, for a node with 2000 local partitions it takes 6 (shared-group) or 
1 (single-cache) additional page(s) per cache.
 For an extreme case of 65500 local partitions per node it takes 194 or 64 
pages per cache.
 However, this tree is normally empty (only requires 1 page) and only takes 
space when the index rebuild is in progress.

2) Before the index rebuild start:
 - Store the partition ids that will be rebuilt into {{index.bin}}.
 - Log a new WAL record {{START_BUILD_INDEX_RECORD}} to protect the new 
information from the crash before the first checkpoint.

3) After successful completion of each partition rebuild:
 - Remove the partition id from the {{"IndexRebuildMarkers"}} tree.

4) On memory recovery:
 - If during logical records recovery we happen to meet 
{{START_BUILD_INDEX_RECORD}}, store partitions from the record into the 
{{index.bin}} unless the file was removed.

5) On cache start:
 - Check if {{index.bin}} exists for a cache-group and then retrieve partition 
build markers from the {{"IndexRebuildMarkers"}} tree.
 - Start index-rebuild for the marked partitions.

6) New API is provided for use by P2P rebalance:

{{public IgniteInternalFuture rebuildIndexesByPartition(CacheGroupContext 
grp, int partId);}}


was (Author: skalashnikov):
Implemented the following solution (PR 
https://github.com/apache/ignite/pull/7070):

Goals:
- Restart failed attempts to rebuild the indexes (due to node crash).
- Minimize the scope of recovery rebuilds to those caches and partitions that 
have not been able to complete the rebuild before the crash.
- Provide ability (and API) to rebuild arbitrarily selected partitions of cache 
indexes.

Design:

1) A set of partition rebuild markers is kept inside {{index.bin}} file (i.e. 
persisted).
For that purpose, the new {{"IndexRebuildMarkers"}} tree is introduced.

Item size for shared cache group: 6 bytes (4 for cacheId and 2 for partition)
Item size for single cache: 2 bytes (just partition)

So, for a node with 2000 local partitions it takes 6 (shared-group) or 
1 (single-cache) additional page(s) per cache.
For an extreme case of 65500 local partitions per node it takes 194 or 64 
pages per cache.
However, this tree is normally empty (only requires 1 page) and only takes 
space when the index rebuild is in progress.

2) Before the index rebuild start:
- Store the partition ids that will be rebuilt into {{index.bin}}.
- Log a new WAL record {{START_BUILD_INDEX_RECORD}} to protect the new 
information from the crash before the first checkpoint.

3) After successful completion of each partition rebuild:
- Remove the partition id from the {{"IndexRebuildMarkers"}} tree.

4) On memory recovery:
- If during logical records recovery we happen to meet 
{{START_BUILD_INDEX_RECORD}}, store partitions from the record into the 
{{index.bin}} unless the file was removed.

5) On cache start:
- Check if {{index.bin}} exists for a cache-group and then retrieve partition 
build markers from the {{"IndexRebuildMarkers"}} tree.
- Start index-rebuild for the marked partitions.

6) New API is provided for use by P2P rebalance:

{{public IgniteInternalFuture rebuildIndexesByPartition(CacheGroupContext 
grp, int partId);}}


> Index rebuild procedure over cache partition file
> -
>
> Key: IGNITE-11075
> URL: https://issues.apache.org/jira/browse/IGNITE-11075
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Maxim Muzafarov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: iep-28
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The node can own a partition when the partition data is rebalanced and cache 
> indexes are ready. For the message-based cluster rebalancing approach, 
> indexes are rebuilt simultaneously with cache data loading. For the 
> file-based rebalancing approach, the index rebuild 

[jira] [Commented] (IGNITE-11075) Index rebuild procedure over cache partition file

2019-11-25 Thread Sergey Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16981517#comment-16981517
 ] 

Sergey Kalashnikov commented on IGNITE-11075:
-

Implemented the following solution (PR 
https://github.com/apache/ignite/pull/7070):

Goals:
- Restart failed attempts to rebuild the indexes (due to node crash).
- Minimize the scope of recovery rebuilds to those caches and partitions that 
have not been able to complete the rebuild before the crash.
- Provide ability (and API) to rebuild arbitrarily selected partitions of cache 
indexes.

Design:

1) A set of partition rebuild markers is kept inside {{index.bin}} file (i.e. 
persisted).
For that purpose, the new {{"IndexRebuildMarkers"}} tree is introduced.

Item size for shared cache group: 6 bytes (4 for cacheId and 2 for partition)
Item size for single cache: 2 bytes (just partition)

So, for a node with 2000 local partitions it takes 6 (shared-group) or 
1 (single-cache) additional page(s) per cache.
For an extreme case of 65500 local partitions per node it takes 194 or 64 
pages per cache.
However, this tree is normally empty (only requires 1 page) and only takes 
space when the index rebuild is in progress.

2) Before the index rebuild start:
- Store the partition ids that will be rebuilt into {{index.bin}}.
- Log a new WAL record {{START_BUILD_INDEX_RECORD}} to protect the new 
information from the crash before the first checkpoint.

3) After successful completion of each partition rebuild:
- Remove the partition id from the {{"IndexRebuildMarkers"}} tree.

4) On memory recovery:
- If during logical records recovery we happen to meet 
{{START_BUILD_INDEX_RECORD}}, store partitions from the record into the 
{{index.bin}} unless the file was removed.

5) On cache start:
- Check if {{index.bin}} exists for a cache-group and then retrieve partition 
build markers from the {{"IndexRebuildMarkers"}} tree.
- Start index-rebuild for the marked partitions.

6) New API is provided for use by P2P rebalance:

{{public IgniteInternalFuture rebuildIndexesByPartition(CacheGroupContext 
grp, int partId);}}
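The per-item sizes above suggest an encoding along these lines (the byte layout and helper names are assumptions for illustration, not Ignite's actual page format):

```java
import java.nio.ByteBuffer;

public class RebuildMarkerSketch {
    /** Assumed shared-group item layout: 4-byte cacheId + 2-byte partition = 6 bytes. */
    static byte[] encodeShared(int cacheId, int partId) {
        return ByteBuffer.allocate(6).putInt(cacheId).putShort((short) partId).array();
    }

    /** Decodes {cacheId, partition}; the partition is read back as unsigned 16 bits. */
    static int[] decodeShared(byte[] item) {
        ByteBuffer buf = ByteBuffer.wrap(item);
        return new int[] { buf.getInt(), buf.getShort() & 0xFFFF };
    }

    public static void main(String[] args) {
        byte[] item = encodeShared(12345, 65000);  // partition ids fit in unsigned 16 bits
        int[] decoded = decodeShared(item);
        System.out.println(item.length + " bytes: cacheId=" + decoded[0] + ", part=" + decoded[1]);
    }
}
```

At 2 bytes per single-cache item, 2000 partitions take about 4000 bytes, consistent with the one-page estimate above.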


> Index rebuild procedure over cache partition file
> -
>
> Key: IGNITE-11075
> URL: https://issues.apache.org/jira/browse/IGNITE-11075
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Maxim Muzafarov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: iep-28
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The node can own a partition when the partition data is rebalanced and cache 
> indexes are ready. For the message-based cluster rebalancing approach, 
> indexes are rebuilt simultaneously with cache data loading. For the 
> file-based rebalancing approach, the index rebuild procedure must be finished 
> before the partition state is set to OWNING. 
> We need to rebuild local SQL indexes (the {{index.bin}} file) when a partition 
> file has been received. Crash-recovery guarantees must be supported by a node 
> since the index rebuild is performed on a node that is already in the topology.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12295) Faster index eviction

2019-10-16 Thread Sergey Kalashnikov (Jira)
Sergey Kalashnikov created IGNITE-12295:
---

 Summary: Faster index eviction
 Key: IGNITE-12295
 URL: https://issues.apache.org/jira/browse/IGNITE-12295
 Project: Ignite
  Issue Type: Sub-task
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov


For the file-based rebalancing approach, it seems feasible to avoid iterating 
over the old partition data in order to clear the indexes.
One can independently clear the shared index structures of all the rows 
referencing entries from moving partitions by deducing the partition id from the 
links in the leaf pages.

The proposed algorithm is simple and takes a set of integer partition ids as 
its input:
1. Iterate over the leaf pages of the index and remove items attributed to any 
of the indicated partitions, unless the item is the only or the rightmost item 
on a page.
2. If the rightmost item (or the only item) on a page happens to belong to any 
of the indicated partitions, employ the regular remove algorithm (descending from 
the root) so that inner pages get correctly updated.
Then restart the iteration from the leaf page where the removed item would be 
inserted (descend from the root to find it).

The use of such an algorithm is justified (as having a performance advantage) 
when the number of keys to be removed is greater than the number of leaf 
pages in the index.
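The two steps can be modeled with a toy in-memory structure (plain Java lists standing in for B+ tree leaf pages; this is not Ignite code). Step 1 bulk-removes matching items in place, while an affected rightmost item is only counted here, since in the real tree it requires a regular root-descent remove to keep inner pages consistent:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class LeafScanEvictionSketch {
    /** A leaf-page item: partition id deduced from the row link. */
    record Item(int partId, long link) {}

    /** Returns {bulkRemoved, itemsNeedingRegularRemove}. */
    static int[] evict(List<List<Item>> leafPages, Set<Integer> parts) {
        int bulk = 0, fallback = 0;

        for (List<Item> page : leafPages) {
            // Step 1: remove matching items in place, skipping the rightmost one.
            for (int i = page.size() - 2; i >= 0; i--) {
                if (parts.contains(page.get(i).partId())) {
                    page.remove(i);
                    bulk++;
                }
            }

            // Step 2: a matching rightmost (or only) item needs a root-descent remove.
            if (!page.isEmpty() && parts.contains(page.get(page.size() - 1).partId()))
                fallback++;
        }
        return new int[] { bulk, fallback };
    }

    public static void main(String[] args) {
        List<List<Item>> pages = List.of(
            new ArrayList<>(List.of(new Item(1, 10), new Item(2, 11), new Item(1, 12))),
            new ArrayList<>(List.of(new Item(3, 20), new Item(3, 21))));

        int[] res = evict(pages, Set.of(1));
        System.out.println("bulk=" + res[0] + ", fallback=" + res[1]);  // bulk=1, fallback=1
    }
}
```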



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-11075) Index rebuild procedure over cache partition file

2019-08-14 Thread Sergey Kalashnikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-11075:
---

Assignee: Sergey Kalashnikov

> Index rebuild procedure over cache partition file
> -
>
> Key: IGNITE-11075
> URL: https://issues.apache.org/jira/browse/IGNITE-11075
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Maxim Muzafarov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: iep-28
>
> The node can own a partition when the partition data is rebalanced and cache 
> indexes are ready. For the message-based cluster rebalancing approach, 
> indexes are rebuilt simultaneously with cache data loading. For the 
> file-based rebalancing approach, the index rebuild procedure must be finished 
> before the partition state is set to OWNING. 
> We need to rebuild local SQL indexes (the {{index.bin}} file) when a partition 
> file has been received. Crash-recovery guarantees must be supported by a node 
> since the index rebuild is performed on a node that is already in the topology.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-8495) CPP Thin: Implement thin client start and connection establishment

2018-06-22 Thread Sergey Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520318#comment-16520318
 ] 

Sergey Kalashnikov commented on IGNITE-8495:


[~isapego], I have left a couple of comments in Upsource. Please take a look.

> CPP Thin: Implement thin client start and connection establishment
> --
>
> Key: IGNITE-8495
> URL: https://issues.apache.org/jira/browse/IGNITE-8495
> Project: Ignite
>  Issue Type: Sub-task
>  Components: platforms
>Affects Versions: 2.4
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.6
>
>
> Need to implement the basic functionality for the C++ thin client: 
> configuration, startup, connection to the server, and handshake.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8838) Query cursor is open after INSERT call

2018-06-20 Thread Sergey Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518359#comment-16518359
 ] 

Sergey Kalashnikov commented on IGNITE-8838:


[~isapego], looks good to me.

> Query cursor is open after INSERT call 
> ---
>
> Key: IGNITE-8838
> URL: https://issues.apache.org/jira/browse/IGNITE-8838
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc, platforms, sql
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Igor Sapego
>Priority: Major
>  Labels: cpp
> Fix For: 2.6
>
>
> The Ignite ODBC driver returns an open cursor for an INSERT command.
> {code}
> AddStatusRecord: Adding new record: Query cursor is in open state already., 
> rowNum: 0, columnNum: 0
>  SQLGetDiagField: SQLGetDiagField called: 1
>  PutString: value: HY010
>  SQLGetDiagField: SQLGetDiagField called: 2
>  SQLGetDiagRec: SQLGetDiagRec called
>  SQLGetDiagRec: SQLGetDiagRec called
>  SQLGetDiagRec: SQLGetDiagRec called
>  SQLParamOptions: SQLParamOptions called
>  SQLBindParameter: SQLBindParameter called: 1, 1, 12
>  SQLBindParameter: SQLBindParameter called: 2, 1, 12
>  SQLBindParameter: SQLBindParameter called: 3, 1, 12
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8371) MVCC TX: Force key request during rebalance may cause error on backups.

2018-06-20 Thread Sergey Kalashnikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-8371:
--

Assignee: Sergey Kalashnikov

> MVCC TX: Force key request during rebalance may cause error on backups.
> ---
>
> Key: IGNITE-8371
> URL: https://issues.apache.org/jira/browse/IGNITE-8371
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Roman Kondakov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: mvcc
>
> When a backup is updated during rebalance and the key to be updated in a TX has 
> not yet been supplied by the previous partition owner, the backup makes a force 
> key request in order to obtain this key and all its versions. But later this 
> key can be sent to this backup from the previous owner once again as part of 
> the standard rebalance process. This causes a write conflict: we have to write 
> this key on the backup once again.
> Solution: do not update a key that has already been written before (during 
> rebalance or the force key request process).
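The proposed solution amounts to a dedup guard on the backup. A toy illustration (all names hypothetical, not Ignite code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RebalanceDedupSketch {
    private final Set<Object> written = new HashSet<>();
    private final Map<Object, Object> store = new HashMap<>();

    /** Returns true if the entry was applied, false if skipped as already written. */
    boolean apply(Object key, Object val) {
        if (!written.add(key))
            return false;        // already written, e.g. via an earlier force key request

        store.put(key, val);
        return true;
    }

    public static void main(String[] args) {
        RebalanceDedupSketch backup = new RebalanceDedupSketch();

        System.out.println(backup.apply("k1", "v1"));  // true: applied via force key request
        System.out.println(backup.apply("k1", "v1"));  // false: rebalance re-delivery skipped
    }
}
```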



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8764) Informatica can not connect to a cluster using ODBC driver on Windows

2018-06-09 Thread Sergey Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507028#comment-16507028
 ] 

Sergey Kalashnikov commented on IGNITE-8764:


[~isapego], I'm OK with the changes.

> Informatica can not connect to a cluster using ODBC driver on Windows
> -
>
> Key: IGNITE-8764
> URL: https://issues.apache.org/jira/browse/IGNITE-8764
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.5
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: odbc
> Fix For: 2.6
>
>
> It crashes or returns garbage on an attempt to connect to a server node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8702) Crash in ODBC driver under Informatica connection checker

2018-06-07 Thread Sergey Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504760#comment-16504760
 ] 

Sergey Kalashnikov commented on IGNITE-8702:


[~isapego], Looks good to me.

> Crash in ODBC driver under Informatica connection checker
> -
>
> Key: IGNITE-8702
> URL: https://issues.apache.org/jira/browse/IGNITE-8702
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Assignee: Igor Sapego
>Priority: Major
>
> I'm trying to connect Informatica to Ignite via ODBC.
> When I try to specify my connection as a ready-made DSN by its name, it 
> starts connecting to remote but then fails:
> {code}
> [ikasnacheev@lab15 ODBC7.1]$ IGNITE_ODBC_LOG_PATH=/home/ikasnacheev/odbc2.log 
> INFA_HOME=/storage/ssd/ikasnacheev 
> LD_LIBRARY_PATH=/storage/ssd/ikasnacheev/ODBC7.1/lib:$LD_LIBRARY_PATH:/storage/ssd/ikasnacheev/services/shared/bin
>  /storage/ssd/ikasnacheev/java/jre/bin/java -d64 -DpwdDecrypt=true 
> -DconnectionName=Lab -DuserName=lab -Dpassword="nq/Jypc7Q2EhoQ2iAQlOCA==" 
> -DconnectionString=LABignite -DdataStoreType=ODBC 
> -DINFA_HOME=/storage/ssd/ikasnacheev -classpath 
> '.:/storage/ssd/ikasnacheev/services/AdministratorConsole/webapps/administrator/WEB-INF/lib/*:/storage/ssd/ikasnacheev/services/shared/jars/platform/*:/storage/ssd/ikasnacheev/services/shared/jars/thirdparty/*:/storage/ssd/ikasnacheev/plugins/osgi/*:/storage/ssd/ikasnacheev/plugins/infa/*:/storage/ssd/ikasnacheev/plugins/dynamic/*'
>  com.informatica.adminconsole.app.chain.commands.TestODBCConnection
> ...
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7faeb806d5e4, pid=26471, tid=140392269498112
> #
> # JRE version: Java(TM) SE Runtime Environment (8.0_77-b03) (build 
> 1.8.0_77-b03)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libignite-odbc.so+0x2c5e4]  
> ignite::odbc::system::TcpSocketClient::Connect(char const*, unsigned short, 
> int, ignite::odbc::diagnostic::Diagnosable&)+0x7b4
> {code}
> The contents of Ignite driver log file as follows:
> {code}
> SQLAllocEnv: SQLAllocEnv called
> SQLSetEnvAttr: SQLSetEnvAttr called
> AddStatusRecord: Adding new record: ODBC version is not supported., rowNum: 
> 0, columnNum: 0
> SQLAllocConnect: SQLAllocConnect called
> SQLGetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> GetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> SQLSetConnectOption: SQLSetConnectOption called
> SQLConnect: SQLConnect called
> SQLConnect: DSN: LABignite
> Connect: Host: 172.25.1.16, port: 10800
> Connect: Addr: 172.25.1.16
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8582) MVCC TX: Cache store read-through support

2018-05-23 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-8582:
--

 Summary: MVCC TX: Cache store read-through support
 Key: IGNITE-8582
 URL: https://issues.apache.org/jira/browse/IGNITE-8582
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Sergey Kalashnikov


Add support for read-through cache store for mvcc caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8581) MVCC TX: data streamer support

2018-05-23 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-8581:
--

 Summary: MVCC TX: data streamer support
 Key: IGNITE-8581
 URL: https://issues.apache.org/jira/browse/IGNITE-8581
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Sergey Kalashnikov


Add support for data streamer for mvcc caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8394) ODBC: Can not establish SSL connection to remote host.

2018-04-26 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454514#comment-16454514
 ] 

Sergey Kalashnikov commented on IGNITE-8394:


[~isapego], I have reviewed the changes. Generally it looks good, but I have 
concerns regarding the function {{SecureSocketClient::AsyncConnectInternal()}}:
1. The name of the function is confusing, since it doesn't actually return 
control until the connection is established or a failure occurs.
2. When it does return {{false}}, the call to {{ssl::SSL_free(sslIO)}} is 
missing.

> ODBC: Can not establish SSL connection to remote host.
> --
>
> Key: IGNITE-8394
> URL: https://issues.apache.org/jira/browse/IGNITE-8394
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: odbc, ssl, tls
> Fix For: 2.5
>
>
> The driver connects to the local server, but when connecting to a remote 
> server the client sometimes returns an error while trying to establish an 
> async connection, although the connection is established successfully if the 
> error is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7186) SQL TX: Replicated caches support

2018-04-26 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-7186:
--

Assignee: Sergey Kalashnikov

> SQL TX: Replicated caches support
> -
>
> Key: IGNITE-7186
> URL: https://issues.apache.org/jira/browse/IGNITE-7186
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>  Labels: iep-3, sql
> Fix For: 2.6
>
>
> Need to implement query execution and update on a near node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8149) MVCC TX Size method should use tx snapshot

2018-04-20 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-8149:
--

Assignee: Sergey Kalashnikov

> MVCC TX Size method should use tx snapshot
> --
>
> Key: IGNITE-8149
> URL: https://issues.apache.org/jira/browse/IGNITE-8149
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> Currently cache.size() returns the number of entries in the cache trees, while 
> there can be several versions of one key-value pair.
> We should instead use the tx snapshot and count only the entries that pass the 
> mvcc filter.
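The snapshot-based counting described above can be illustrated with a minimal, self-contained sketch (this is not Ignite code; the `Version` record and the visibility rule are simplified assumptions):

```java
import java.util.*;

public class MvccSizeSketch {
    /** One version of a key-value pair; visible to snapshots at or after createVer. */
    record Version(int key, String val, long createVer) {}

    /** Naive size: counts every version stored in the tree, including duplicates of a key. */
    static int treeSize(List<Version> tree) {
        return tree.size();
    }

    /** Snapshot-aware size: counts distinct keys with at least one version visible to the snapshot. */
    static int snapshotSize(List<Version> tree, long snapshotVer) {
        Set<Integer> visibleKeys = new HashSet<>();
        for (Version v : tree)
            if (v.createVer() <= snapshotVer)  // simplified mvcc visibility filter
                visibleKeys.add(v.key());
        return visibleKeys.size();
    }

    public static void main(String[] args) {
        // Key 1 has two versions (it was updated inside a tx); key 2 is not yet visible.
        List<Version> tree = List.of(
            new Version(1, "a", 10), new Version(1, "a2", 20), new Version(2, "b", 30));

        System.out.println(treeSize(tree));          // 3: counts both versions of key 1
        System.out.println(snapshotSize(tree, 25));  // 1: only key 1 is visible at version 25
    }
}
```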



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8206) SQL TX: Rewrite MvccCursor

2018-04-18 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-8206:
--

Assignee: Sergey Kalashnikov

> SQL TX: Rewrite MvccCursor
> --
>
> Key: IGNITE-8206
> URL: https://issues.apache.org/jira/browse/IGNITE-8206
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> Currently we materialize rows before applying the Mvcc filter, which means we 
> deserialize all rows, including those invisible to the snapshot.
> We should pass the filter to the BPlusTree.find() method instead.
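The idea of pushing the filter below row materialization can be sketched as follows (a hypothetical model, not the actual BPlusTree API; the cheap version-header check stands in for the real mvcc filter):

```java
import java.util.*;
import java.util.function.LongPredicate;

public class TreeFilterSketch {
    /** A stored row: a cheap-to-read version header plus a payload that is expensive to deserialize. */
    record StoredRow(long ver, byte[] payload) {}

    static int deserializations = 0;

    static String deserialize(byte[] payload) {
        deserializations++;                         // track the expensive step
        return new String(payload);
    }

    /** Current approach: materialize every row, then filter. */
    static List<String> findMaterializeThenFilter(List<StoredRow> rows, LongPredicate visible) {
        List<String> res = new ArrayList<>();
        for (StoredRow r : rows) {
            String val = deserialize(r.payload());  // deserializes invisible rows too
            if (visible.test(r.ver()))
                res.add(val);
        }
        return res;
    }

    /** Proposed approach: apply the visibility filter first, deserialize only the matches. */
    static List<String> findWithFilter(List<StoredRow> rows, LongPredicate visible) {
        List<String> res = new ArrayList<>();
        for (StoredRow r : rows)
            if (visible.test(r.ver()))              // filter on the header, before materialization
                res.add(deserialize(r.payload()));
        return res;
    }

    public static void main(String[] args) {
        List<StoredRow> rows = List.of(
            new StoredRow(10, "a".getBytes()), new StoredRow(99, "b".getBytes()));
        LongPredicate snapshot = ver -> ver <= 50;

        deserializations = 0;
        findMaterializeThenFilter(rows, snapshot);
        System.out.println(deserializations);  // 2: even the invisible row was deserialized

        deserializations = 0;
        findWithFilter(rows, snapshot);
        System.out.println(deserializations);  // 1: the invisible row was skipped
    }
}
```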



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-7973) TX SQL: plain INSERT should not be broadcasted to all data nodes

2018-04-18 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov resolved IGNITE-7973.

Resolution: Duplicate

This is fixed as part of .

> TX SQL: plain INSERT should not be broadcasted to all data nodes
> 
>
> Key: IGNITE-7973
> URL: https://issues.apache.org/jira/browse/IGNITE-7973
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Priority: Critical
> Fix For: 2.6
>
>
> At the moment all {{INSERT}} statements are broadcast. This may be OK for 
> {{INSERT ... SELECT}}, but is definitely not needed for {{INSERT ... 
> VALUES}}. Instead, we should construct the final key-value pairs locally and 
> then send them only to the affected data nodes.
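Constructing the key-value pairs locally and routing them only to the affected nodes can be sketched like this (a simplified model; the affinity function and the partition-to-node assignment are assumptions, not Ignite's real implementation):

```java
import java.util.*;

public class InsertRoutingSketch {
    /** Simplified affinity function: key -> partition. */
    static int partition(Object key, int parts) {
        return Math.floorMod(key.hashCode(), parts);
    }

    /** Group the rows of an INSERT ... VALUES by owning node instead of broadcasting them. */
    static Map<String, Map<Object, Object>> batchesByNode(
            Map<Object, Object> rows, List<String> nodes, int parts) {
        Map<String, Map<Object, Object>> batches = new HashMap<>();
        for (Map.Entry<Object, Object> e : rows.entrySet()) {
            // Assume each partition is owned by exactly one node (round-robin assignment here).
            String owner = nodes.get(partition(e.getKey(), parts) % nodes.size());
            batches.computeIfAbsent(owner, n -> new HashMap<>()).put(e.getKey(), e.getValue());
        }
        return batches;  // each batch is then sent only to its owner node
    }

    public static void main(String[] args) {
        Map<Object, Object> rows = Map.of(1, "John", 2, "Jane", 3, "Jim", 4, "Joe");
        Map<String, Map<Object, Object>> batches =
            batchesByNode(rows, List.of("nodeA", "nodeB"), 1024);
        System.out.println(batches.keySet());  // at most one batch per owner node
    }
}
```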



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8239) SQL TX: Do not use skipReducer flag for MVCC DML requests

2018-04-18 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-8239:
--

Assignee: Sergey Kalashnikov

> SQL TX: Do not use skipReducer flag for MVCC DML requests
> -
>
> Key: IGNITE-8239
> URL: https://issues.apache.org/jira/browse/IGNITE-8239
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> Currently we explicitly set skipReducer flag to true to get UpdatePlan with 
> DmlDistributedPlanInfo. We should check if mvcc is enabled instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8013) CPP: Check pending snapshots in BinaryTypeManager::GetHandler

2018-04-13 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437387#comment-16437387
 ] 

Sergey Kalashnikov commented on IGNITE-8013:


[~isapego], Looks good to me.

> CPP: Check pending snapshots in BinaryTypeManager::GetHandler
> -
>
> Key: IGNITE-8013
> URL: https://issues.apache.org/jira/browse/IGNITE-8013
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Affects Versions: 2.0
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: cpp
>
> This will improve performance a lot when using operations like {{PutAll()}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8012) CPP: BinaryWriter::WriteElement should accept const reference instead of value.

2018-04-13 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437385#comment-16437385
 ] 

Sergey Kalashnikov commented on IGNITE-8012:


[~isapego], Looks good to me.

> CPP: BinaryWriter::WriteElement should accept const reference instead of 
> value.
> ---
>
> Key: IGNITE-8012
> URL: https://issues.apache.org/jira/browse/IGNITE-8012
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Affects Versions: 2.0
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: cpp
>
> This will improve performance in cases where large objects are used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8032) Fix issues within TX DML reducer.

2018-04-13 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-8032:
---
Component/s: sql

> Fix issues within TX DML reducer.
> -
>
> Key: IGNITE-8032
> URL: https://issues.apache.org/jira/browse/IGNITE-8032
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> The following code review issues need to be addressed:
> 1. GridNearTxQueryResultsEnlistFuture
> 1.1. Remove the GridCacheCompoundIdentityFuture implementation and the remote 
> mini-futures.
> 1.2. Improve concurrency around sendNextBatches calls.
> 2. Refactor iterator UpdateIteratorAdapter/TxDmlReducerIterator to avoid 
> multi-level nesting.
> 3. Normalize usage of IgniteBiTuple(k,v)/Object(key) instead of Object[] to 
> represent rows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7682) C++: LocalSize cache functions

2018-03-23 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411402#comment-16411402
 ] 

Sergey Kalashnikov commented on IGNITE-7682:


[~isapego], Looks good to me.

> C++: LocalSize cache functions
> --
>
> Key: IGNITE-7682
> URL: https://issues.apache.org/jira/browse/IGNITE-7682
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 1.5.0.final
> Environment: Ignite builded by jdk1.8.0_152 with sources 
> tag:ignite-2.3
> cpp libs builded by Microsoft Visual Studio Enterprise 2015 Version 
> 14.0.25431.01 Update 3
> all x64
>Reporter: Roman Bastanov
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.5
>
>
> LocalSize functions with all variations of CachePeekMode return the same results.
> They always return the total cache size, i.e. the sum over all node caches.
> {code}
> auto cache = IgniteNode.GetCache<...>(cache_name);
> cache.LocalSize(ignite::cache::CachePeekMode::BACKUP)
> cache.LocalSize(ignite::cache::CachePeekMode::NEAR_CACHE)
> cache.LocalSize(ignite::cache::CachePeekMode::OFFHEAP)
> cache.LocalSize(ignite::cache::CachePeekMode::ONHEAP)
> cache.LocalSize(ignite::cache::CachePeekMode::PRIMARY)
> cache.LocalSize(ignite::cache::CachePeekMode::SWAP)
> {code}
> In contrast, manual calculation is correct and returns the local size (the 
> cache on this node).
> {code}
> auto query = cache::query::ScanQuery();
> query.SetLocal(true);
> auto cursor = cache.Query(query);
> int cache_size = 0;
> while (cursor.HasNext()) {
>     cursor.GetNext(); // advance the cursor, otherwise the loop never terminates
>     cache_size++;
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8032) Fix issues within TX DML reducer.

2018-03-23 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-8032:
---
Issue Type: Task  (was: Sub-task)
Parent: (was: IGNITE-4191)

> Fix issues within TX DML reducer.
> -
>
> Key: IGNITE-8032
> URL: https://issues.apache.org/jira/browse/IGNITE-8032
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> The following code review issues need to be addressed:
> 1. GridNearTxQueryResultsEnlistFuture
> 1.1. Remove the GridCacheCompoundIdentityFuture implementation and the remote 
> mini-futures.
> 1.2. Improve concurrency around sendNextBatches calls.
> 2. Refactor iterator UpdateIteratorAdapter/TxDmlReducerIterator to avoid 
> multi-level nesting.
> 3. Normalize usage of IgniteBiTuple(k,v)/Object(key) instead of Object[] to 
> represent rows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8032) Fix issues within TX DML reducer.

2018-03-23 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-8032:
--

 Summary: Fix issues within TX DML reducer.
 Key: IGNITE-8032
 URL: https://issues.apache.org/jira/browse/IGNITE-8032
 Project: Ignite
  Issue Type: Sub-task
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov


The following code review issues need to be addressed:

1. GridNearTxQueryResultsEnlistFuture

1.1. Remove the GridCacheCompoundIdentityFuture implementation and the remote 
mini-futures.
1.2. Improve concurrency around sendNextBatches calls.

2. Refactor iterator UpdateIteratorAdapter/TxDmlReducerIterator to avoid 
multi-level nesting.

3. Normalize usage of IgniteBiTuple(k,v)/Object(key) instead of Object[] to 
represent rows.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-7604) SQL TX: Allow DML operations with reducer

2018-03-23 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov resolved IGNITE-7604.

Resolution: Fixed

> SQL TX: Allow DML operations with reducer
> -
>
> Key: IGNITE-7604
> URL: https://issues.apache.org/jira/browse/IGNITE-7604
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> The following protocol is proposed for DML request with non-trivial reduce 
> step within a transaction.
> 1. The SQL select part is deduced from a DML request and is split to form 
> two-step map/reduce request.
> 2. Map query requests are sent to data nodes which execute them locally.
> 3. Resulting data pages are sent to originating node (reducer), which 
> accumulates them.
> 4. Originating node performs reduce step on data received from map nodes and 
> forms batches of updates to apply to target table.
> 5. Lock requests containing delta updates are mapped and sent to data nodes 
> storing the corresponding keys.
> 6. Lock acks are received at originating node and accumulated there, 
> producing the total update counter.
> Note that no locks are acquired when map requests are processed. 
> This is consistent with what Oracle and PostgreSQL do (but not MySQL!) with 
> respect to locks within complex DML statements.
> The Oracle docs 
> (https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT1351)
>  specifically states:
> The transaction that contains a DML statement does not need to acquire row 
> locks on any rows selected by a subquery or an implicit query, such as a 
> query in a WHERE clause. A subquery or implicit query in a DML statement is 
> guaranteed to be consistent as of the start of the query and does not see the 
> effects of the DML statement it is part of.
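The map and reduce steps above can be condensed into a small sketch (hypothetical names; the update rule and the key-ownership function are placeholders for the real query splitting and affinity logic):

```java
import java.util.*;
import java.util.stream.*;

public class DmlReduceSketch {
    record Row(int key, int val) {}

    /** Step 2: each data node runs the map query locally (here: select rows with val < limit). */
    static List<Row> mapQuery(List<Row> localData, int limit) {
        return localData.stream().filter(r -> r.val() < limit).collect(Collectors.toList());
    }

    /** Steps 3-5: the reducer accumulates the pages and forms per-owner-node update batches. */
    static Map<Integer, List<Row>> reduceToBatches(List<List<Row>> pages, int nodeCount) {
        return pages.stream()
            .flatMap(List::stream)
            .map(r -> new Row(r.key(), r.val() + 1))  // the DML update, e.g. SET val = val + 1
            .collect(Collectors.groupingBy(r -> Math.floorMod(r.key(), nodeCount)));  // key owner
    }

    public static void main(String[] args) {
        // Local data of two hypothetical data nodes.
        List<Row> node0 = List.of(new Row(0, 1), new Row(2, 9));
        List<Row> node1 = List.of(new Row(1, 3));

        // Map results travel to the reducer as pages; lock requests with delta
        // updates would then be sent to the nodes owning the corresponding keys.
        List<List<Row>> pages = List.of(mapQuery(node0, 5), mapQuery(node1, 5));
        Map<Integer, List<Row>> batches = reduceToBatches(pages, 2);

        System.out.println(batches.size());  // 2: one lock/update batch per owner node
    }
}
```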



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7604) SQL TX: Allow DML operations with reducer

2018-03-20 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406167#comment-16406167
 ] 

Sergey Kalashnikov commented on IGNITE-7604:


[~vozerov],

p.1,3,5,6,7) fixed.
p.2) In that case, the query will be processed with the current one-step "query 
enlist" protocol.
In fact, 'isLocSubqry' is always false for anything other than INSERT, MERGE or 
BULK_LOAD (the latter being a mistake, I guess).
Thus, if we removed the check for mode equal to INSERT or MERGE here, UPDATE 
and DELETE queries would always be processed with the new "map/reduce + batch" 
protocol, which is incorrect.

p.4) First we check that the transaction context has a timeout handler installed 
(in the absence of a current operation it is GridNearTxLocal's own handler, 
which would initiate a rollback).
We need to replace the current "idle" handler with our own handler for the 
duration of our operation, so we call tx.removeTimeoutHandler().
If we fail to remove the old handler, it means the tx has already timed out, and 
we must arrange to be notified when the transaction is rolled back in order to 
fail our own future.
If the removal was successful, we install our own handler.
When our future completes, we restore GridNearTxLocal's handler with a call to 
tx.addTimeoutHandler().
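The handler-swap sequence described here can be modeled with a minimal stand-in class (the `Tx` class below is a hypothetical sketch, not GridNearTxLocal; only the remove/install/restore protocol is illustrated):

```java
import java.util.concurrent.atomic.AtomicReference;

public class TxTimeoutHandlerSketch {
    /** Minimal stand-in for a transaction with a single replaceable timeout handler. */
    static class Tx {
        private final AtomicReference<Runnable> handler = new AtomicReference<>();
        private volatile boolean timedOut;

        Tx(Runnable idleHandler) { handler.set(idleHandler); }

        /** Returns false if the tx already timed out (the installed handler has fired). */
        boolean removeTimeoutHandler() {
            return !timedOut && handler.getAndSet(null) != null;
        }

        void addTimeoutHandler(Runnable h) { handler.set(h); }

        void fireTimeout() {
            timedOut = true;
            Runnable h = handler.getAndSet(null);
            if (h != null) h.run();
        }
    }

    public static void main(String[] args) {
        Runnable idle = () -> System.out.println("idle handler: rollback");
        Tx tx = new Tx(idle);

        // Start of the DML operation: try to take over timeout handling.
        if (tx.removeTimeoutHandler()) {
            tx.addTimeoutHandler(() -> System.out.println("op handler: fail the future"));
            // ... the operation runs; on completion, restore the idle handler:
            tx.removeTimeoutHandler();
            tx.addTimeoutHandler(idle);
        } else {
            // The tx already timed out: instead, arrange to fail our future on rollback.
            System.out.println("tx already timed out");
        }
        System.out.println("done");
    }
}
```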

> SQL TX: Allow DML operations with reducer
> -
>
> Key: IGNITE-7604
> URL: https://issues.apache.org/jira/browse/IGNITE-7604
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> The following protocol is proposed for DML request with non-trivial reduce 
> step within a transaction.
> 1. The SQL select part is deduced from a DML request and is split to form 
> two-step map/reduce request.
> 2. Map query requests are sent to data nodes which execute them locally.
> 3. Resulting data pages are sent to originating node (reducer), which 
> accumulates them.
> 4. Originating node performs reduce step on data received from map nodes and 
> forms batches of updates to apply to target table.
> 5. Lock requests containing delta updates are mapped and sent to data nodes 
> storing the corresponding keys.
> 6. Lock acks are received at originating node and accumulated there, 
> producing the total update counter.
> Note that no locks are acquired when map requests are processed. 
> This is consistent with what Oracle and PostgreSQL do (but not MySQL!) with 
> respect to locks within complex DML statements.
> The Oracle docs 
> (https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT1351)
>  specifically states:
> The transaction that contains a DML statement does not need to acquire row 
> locks on any rows selected by a subquery or an implicit query, such as a 
> query in a WHERE clause. A subquery or implicit query in a DML statement is 
> guaranteed to be consistent as of the start of the query and does not see the 
> effects of the DML statement it is part of.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7811) ODBC: Implement connection failover

2018-03-19 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404601#comment-16404601
 ] 

Sergey Kalashnikov commented on IGNITE-7811:


[~isapego], Looks good to me. Thanks!

> ODBC: Implement connection failover
> ---
>
> Key: IGNITE-7811
> URL: https://issues.apache.org/jira/browse/IGNITE-7811
> Project: Ignite
>  Issue Type: New Feature
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: odbc
> Fix For: 2.5
>
>
> Currently the user has to manually connect to a specific Ignite server.
> Implement some kind of automatic failover, where the ODBC driver knows about 
> multiple nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7811) ODBC: Implement connection failover

2018-03-19 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404539#comment-16404539
 ] 

Sergey Kalashnikov commented on IGNITE-7811:


[~isapego], I have reviewed the changes. Looks good. I only have a few minor 
comments. Please check the upsource.

> ODBC: Implement connection failover
> ---
>
> Key: IGNITE-7811
> URL: https://issues.apache.org/jira/browse/IGNITE-7811
> Project: Ignite
>  Issue Type: New Feature
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: odbc
> Fix For: 2.5
>
>
> Currently the user has to manually connect to a specific Ignite server.
> Implement some kind of automatic failover, where the ODBC driver knows about 
> multiple nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7851) .NET: linq query throws "Hexadecimal string with odd number of characters" exception

2018-03-07 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389333#comment-16389333
 ] 

Sergey Kalashnikov commented on IGNITE-7851:


[~alexey.tank2], looks good to me.

> .NET: linq query throws "Hexadecimal string with odd number of characters" 
> exception
> 
>
> Key: IGNITE-7851
> URL: https://issues.apache.org/jira/browse/IGNITE-7851
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, sql
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>Priority: Major
> Attachments: FirstOrDefaultKeyIssue.zip
>
>
> Simple linq query with .Key throws an exception
> {code}
> var models = cache.AsCacheQueryable();
> var entry = models.FirstOrDefault(m => m.Key == @"TST-1/1");
> {code}
> Apache.Ignite.Core.Common.IgniteException: Hexadecimal string with odd number 
> of characters: "TST-1/1" [90003-195] ---> 
> Apache.Ignite.Core.Common.JavaException: class 
> org.apache.ignite.IgniteCheckedException: Hexadecimal string with odd number 
> of characters: "TST-1/1" [90003-195]
> at 
> org.apache.ignite.internal.processors.platform.utils.PlatformUtils.unwrapQueryException(PlatformUtils.java:519)
> at 
> org.apache.ignite.internal.processors.platform.cache.PlatformCache.runFieldsQuery(PlatformCache.java:1240)
> at 
> org.apache.ignite.internal.processors.platform.cache.PlatformCache.processInStreamOutObject(PlatformCache.java:877)
> at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: javax.cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Hexadecimal string with odd number 
> of characters: "TST-1/1" [90003-195]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1917)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:585)
> at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:368)
> at 
> org.apache.ignite.internal.processors.platform.cache.PlatformCache.runFieldsQuery(PlatformCache.java:1234)
> ... 2 more
> Caused by: class org.apache.ignite.IgniteCheckedException: Hexadecimal string 
> with odd number of characters: "TST-1/1" [90003-195]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2468)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1914)
> ... 5 more
> Caused by: org.h2.message.DbException: Hexadecimal string with odd number of 
> characters: "TST-1/1" [90003-195]
> at org.h2.message.DbException.get(DbException.java:179)
> at org.h2.message.DbException.get(DbException.java:155)
> at org.h2.util.StringUtils.convertHexToBytes(StringUtils.java:930)
> at org.h2.value.Value.convertTo(Value.java:957)
> at 
> org.apache.ignite.internal.processors.query.h2.H2Utils.convert(H2Utils.java:262)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindPartitionInfoParameter(IgniteH2Indexing.java:2520)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.calculateQueryPartitions(IgniteH2Indexing.java:2480)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeTwoStepsQuery(IgniteH2Indexing.java:1556)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1500)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1909)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1907)
> at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)
> ... 6 more
> Caused by: org.h2.jdbc.JdbcSQLException: Hexadecimal string with odd number 
> of characters: "TST-1/1" [90003-195]
> at 
> org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
> ... 19 more
>--- End of inner exception stack trace ---
>at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.Error(Void* 
> target, Int32 errType, SByte* errClsChars, Int32 errClsCharsLen, SByte* 
> errMsgChars, Int32 errMsgCharsLen, SByte* stackTraceChars, Int32 
> stackTraceCharsLen, Void* errData, Int32 errDataLen)
>   

[jira] [Comment Edited] (IGNITE-7848) On Date type mismatch DDL functionality is broken

2018-03-01 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381921#comment-16381921
 ] 

Sergey Kalashnikov edited comment on IGNITE-7848 at 3/1/18 12:32 PM:
-

[~andmed], I have looked into this issue, and unfortunately this is kind of 
expected behaviour now. The current implementation of DDL ADD and DROP COLUMN 
has a limitation: it only updates the metadata and does not modify the data 
itself. So the CREATE INDEX and SELECT in your test fail because they see old 
field data of the wrong type. With the current implementation you need to clean 
the old fields manually first (e.g. NULL them with an UPDATE).


was (Author: skalashnikov):
[~andmed], I have looked into this issue and unfortunately this is kind of 
expected behaviour now. Current implementation of DDL ADD and DROP column has a 
limitation does not modify the data. So the CREATE INDEX and SELECT in your 
test fail because they see old field data which are of wrong type. With current 
implementation you need to clean the old fields manually first (NULL them with 
update f.ex).

> On Date type mismatch DDL functionality is broken
> -
>
> Key: IGNITE-7848
> URL: https://issues.apache.org/jira/browse/IGNITE-7848
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrew Medvedev
>Priority: Major
> Attachments: DateCannotBeCastTest.java
>
>
> when Date type in value object is originally set as java.util.Date, then 
> after ADD COLUMN IF NOT EXISTS and CREATE INDEX on this field, basic SQL 
> functionality (SELECT) is broken



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7848) On Date type mismatch DDL functionality is broken

2018-03-01 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381921#comment-16381921
 ] 

Sergey Kalashnikov commented on IGNITE-7848:


[~andmed], I have looked into this issue, and unfortunately this is kind of 
expected behaviour now. The current implementation of DDL ADD and DROP COLUMN 
has a limitation: it does not modify the data. So the CREATE INDEX and SELECT 
in your test fail because they see old field data of the wrong type. With the 
current implementation you need to clean the old fields manually first (e.g. 
NULL them with an UPDATE).

> On Date type mismatch DDL functionality is broken
> -
>
> Key: IGNITE-7848
> URL: https://issues.apache.org/jira/browse/IGNITE-7848
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrew Medvedev
>Priority: Major
> Attachments: DateCannotBeCastTest.java
>
>
> when Date type in value object is originally set as java.util.Date, then 
> after ADD COLUMN IF NOT EXISTS and CREATE INDEX on this field, basic SQL 
> functionality (SELECT) is broken



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7843) SQL: ALTER TABLE DROP column may break certain SQL queries

2018-02-28 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7843:
---
Description: The command DROP column leads to subsequent SQL errors if 
there is some indexed field next to removed field.  (was: The command DROP 
table leads to subsequent SQL errors if there is some indexed field next to 
removed field.)

> SQL: ALTER TABLE DROP column may break certain SQL queries
> --
>
> Key: IGNITE-7843
> URL: https://issues.apache.org/jira/browse/IGNITE-7843
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Blocker
>
> The command DROP column leads to subsequent SQL errors if there is some 
> indexed field next to removed field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7843) SQL: ALTER TABLE DROP column may break certain SQL queries

2018-02-28 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380408#comment-16380408
 ] 

Sergey Kalashnikov commented on IGNITE-7843:


[~vozerov], Could you please review the fix?

> SQL: ALTER TABLE DROP column may break certain SQL queries
> --
>
> Key: IGNITE-7843
> URL: https://issues.apache.org/jira/browse/IGNITE-7843
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Blocker
>
> The command DROP table leads to subsequent SQL errors if there is some 
> indexed field next to removed field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7843) SQL: ALTER TABLE DROP column may break certain SQL queries

2018-02-28 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-7843:
--

 Summary: SQL: ALTER TABLE DROP column may break certain SQL queries
 Key: IGNITE-7843
 URL: https://issues.apache.org/jira/browse/IGNITE-7843
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov


The command DROP table leads to subsequent SQL errors if there is some indexed 
field next to removed field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7362) ODBC: Third party libraries truncate any inserted varlen data to ColumnSize

2018-02-26 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376634#comment-16376634
 ] 

Sergey Kalashnikov commented on IGNITE-7362:


[~isapego], Looks good to me. Thanks!

> ODBC: Third party libraries truncate any inserted varlen data to ColumnSize
> ---
>
> Key: IGNITE-7362
> URL: https://issues.apache.org/jira/browse/IGNITE-7362
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.5
>
>
> Third-party frameworks and ODBC bindings for different languages use the 
> results of column metadata requests (such as {{SQL_COLUMN_PRECISION}}) to 
> truncate varlen data inserted by the user; the reported size is only 64 by 
> default.
> {code:php}
> <?php
> ini_set("display_errors", 1);
> error_reporting(E_ALL);
> try {
> $ignite = new PDO('odbc:Apache Ignite');
> $ignite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
> $sql = 'DROP TABLE IF EXISTS test';
> $ignite->exec($sql);
> $sql = 'CREATE TABLE IF NOT EXISTS test (id int PRIMARY KEY, userkey 
> VARCHAR(1000))';
> $ignite->exec($sql);
> $id = 1;
> $varval = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed 
> do elit, sed';
> $dbs = $ignite->prepare("INSERT INTO test (id, userkey) VALUES ($id, 
> '$varval')");
> $dbs->execute();
> $dbs = $ignite->prepare("SELECT userkey from test where id=$id");
> $dbs->execute();
> $res = $dbs->fetchColumn();
> assert($varval == $res);
> } catch (PDOException $e) {
> print "Error!: " . $e->getMessage() . "\n";
> die();
> }
> ?>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7604) SQL TX: Allow DML operations with reducer

2018-02-22 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7604:
---
Description: 
The following protocol is proposed for DML request with non-trivial reduce step 
within a transaction.

1. The SQL select part is deduced from a DML request and is split to form 
two-step map/reduce request.

2. Map query requests are sent to data nodes which execute them locally.

3. Resulting data pages are sent to originating node (reducer), which 
accumulates them.

4. Originating node performs reduce step on data received from map nodes and 
forms batches of updates to apply to target table.

5. Lock requests containing delta updates are mapped and sent to data nodes 
storing the corresponding keys.

6. Lock acks are received at originating node and accumulated there, producing 
the total update counter.

Note that no locks are acquired when map requests are processed. 
This is consistent with what Oracle and PostgreSQL do (but not MySQL!) with 
respect to locks within complex DML statements.

The Oracle docs 
(https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT1351) 
specifically state:

The transaction that contains a DML statement does not need to acquire row 
locks on any rows selected by a subquery or an implicit query, such as a query 
in a WHERE clause. A subquery or implicit query in a DML statement is 
guaranteed to be consistent as of the start of the query and does not see the 
effects of the DML statement it is part of.


  was:Allow DML operations with reducer


> SQL TX: Allow DML operations with reducer
> -
>
> Key: IGNITE-7604
> URL: https://issues.apache.org/jira/browse/IGNITE-7604
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> The following protocol is proposed for DML request with non-trivial reduce 
> step within a transaction.
> 1. The SQL select part is deduced from a DML request and is split to form 
> two-step map/reduce request.
> 2. Map query requests are sent to data nodes which execute them locally.
> 3. Resulting data pages are sent to originating node (reducer), which 
> accumulates them.
> 4. Originating node performs reduce step on data received from map nodes and 
> forms batches of updates to apply to target table.
> 5. Lock requests containing delta updates are mapped and sent to data nodes 
> storing the corresponding keys.
> 6. Lock acks are received at originating node and accumulated there, 
> producing the total update counter.
> Note that no locks are acquired when map requests are processed. 
> This is consistent with what Oracle and PostgreSQL do (but not MySQL!) with 
> respect to locks within complex DML statements.
> The Oracle docs 
> (https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT1351)
>  specifically states:
> The transaction that contains a DML statement does not need to acquire row 
> locks on any rows selected by a subquery or an implicit query, such as a 
> query in a WHERE clause. A subquery or implicit query in a DML statement is 
> guaranteed to be consistent as of the start of the query and does not see the 
> effects of the DML statement it is part of.
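As a generic illustration of step 4 above (reducer-side batching), the sketch below groups reduced rows into per-node update batches before lock requests are sent out. All names here (ReduceBatcher, nodeFor, batchUpdates) are hypothetical stand-ins, not actual Ignite internals, and the modulo mapping merely imitates affinity-based key-to-node assignment:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the reduce-side batching step: rows accumulated
// from map nodes are grouped into per-node update batches. The node mapping
// is a toy stand-in for real affinity mapping.
public class ReduceBatcher {
    // Maps a key to the node that owns it (illustrative only).
    static int nodeFor(int key, int nodeCount) {
        return Math.floorMod(key, nodeCount);
    }

    // Groups reduced rows (key, value pairs) into batches keyed by target node.
    static Map<Integer, List<int[]>> batchUpdates(List<int[]> rows, int nodeCount) {
        Map<Integer, List<int[]>> batches = new HashMap<>();

        for (int[] row : rows)
            batches.computeIfAbsent(nodeFor(row[0], nodeCount), n -> new ArrayList<>()).add(row);

        return batches;
    }

    public static void main(String[] args) {
        List<int[]> rows = Arrays.asList(new int[]{1, 10}, new int[]{2, 20}, new int[]{4, 40});

        // With 3 nodes, keys 1 and 4 land on node 1, key 2 on node 2.
        Map<Integer, List<int[]>> batches = batchUpdates(rows, 3);

        System.out.println(batches.get(1).size()); // 2 rows for node 1
    }
}
```

Each batch would then back one lock request sent to the owning data node, whose ack contributes to the total update counter.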



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7604) SQL TX: Allow DML operations with reducer

2018-02-22 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-7604:
--

Assignee: Sergey Kalashnikov

> SQL TX: Allow DML operations with reducer
> -
>
> Key: IGNITE-7604
> URL: https://issues.apache.org/jira/browse/IGNITE-7604
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Igor Seliverstov
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> Allow DML operations with reducer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-3111) .NET: Configure SSL without Spring

2018-02-13 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16362031#comment-16362031
 ] 

Sergey Kalashnikov commented on IGNITE-3111:


[~alexey.tank2], looks good to me.

> .NET: Configure SSL without Spring
> --
>
> Key: IGNITE-3111
> URL: https://issues.apache.org/jira/browse/IGNITE-3111
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Affects Versions: 1.6
>Reporter: Pavel Tupitsyn
>Assignee: Alexey Popov
>Priority: Major
>  Labels: .net
> Fix For: 2.5
>
>
> User should be able to configure SSL in .NET terms without Spring and Java 
> KeyStore.
> See https://apacheignite.readme.io/docs/ssltls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7512) Variable updated should be checked for null before invocation of ctx.validateKeyAndValue(entry.key(), updated) in GridDhtAtomicCache.updateWithBatch

2018-02-08 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356677#comment-16356677
 ] 

Sergey Kalashnikov commented on IGNITE-7512:


[~agoncharuk], Could you please review this small fix? 
TC seems to be OK. 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F3429%2Fhead

> Variable updated should be checked for null before invocation of 
> ctx.validateKeyAndValue(entry.key(), updated) in 
> GridDhtAtomicCache.updateWithBatch
> 
>
> Key: IGNITE-7512
> URL: https://issues.apache.org/jira/browse/IGNITE-7512
> Project: Ignite
>  Issue Type: Bug
>Reporter: Evgenii Zhuravlev
>Assignee: Sergey Kalashnikov
>Priority: Major
> Fix For: 2.5
>
>
> Or it could lead to the NPE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7192) JDBC: support FQDN to multiple IPs during connection establishment

2018-02-07 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355421#comment-16355421
 ] 

Sergey Kalashnikov commented on IGNITE-7192:


[~guseinov], looks good to me now. Thanks

> JDBC: support FQDN to multiple IPs during connection establishment
> --
>
> Key: IGNITE-7192
> URL: https://issues.apache.org/jira/browse/IGNITE-7192
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Roman Guseinov
>Priority: Major
>  Labels: pull-request-available
>
> The thin JDBC driver may have an FQDN (host name) in its connection string.
> Currently, it resolves the FQDN to a single IP and tries to connect to that 
> IP only.
> It would be better to try the IPs one by one until a connection succeeds 
> when DNS returns multiple A-records (an FQDN can resolve to several IPs). 
> This would give JDBC thin driver users a simple fallback option.
> Similar functionality is already implemented in the ODBC driver.
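The proposed fallback can be sketched with plain JDK networking; this is an illustrative technique only, not the actual driver code (the class and method names are invented):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.UnknownHostException;

// Illustrative sketch: resolve all A-records for the host and try each
// address in turn until one connection succeeds.
public class MultiIpConnect {
    static Socket connectAny(String host, int port, int timeoutMs) throws IOException {
        InetAddress[] addrs = InetAddress.getAllByName(host); // all resolved IPs

        IOException last = null;

        for (InetAddress addr : addrs) {
            try {
                Socket sock = new Socket();
                sock.connect(new InetSocketAddress(addr, port), timeoutMs);
                return sock; // first reachable IP wins
            }
            catch (IOException e) {
                last = e; // remember the failure, fall through to the next IP
            }
        }

        // No address was reachable: rethrow the last connection failure.
        throw last != null ? last : new UnknownHostException(host);
    }
}
```

Failures on earlier addresses are swallowed; only when every resolved IP fails is the last exception propagated to the caller.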



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6625) JDBC thin: support SSL connection to Ignite node

2018-02-07 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355412#comment-16355412
 ] 

Sergey Kalashnikov commented on IGNITE-6625:


[~tledkov-gridgain], the changes look good to me. Thank you.

> JDBC thin: support SSL connection to Ignite node
> 
>
> Key: IGNITE-6625
> URL: https://issues.apache.org/jira/browse/IGNITE-6625
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 2.2
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.5
>
>
> SSL connection must be supported for JDBC thin driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6625) JDBC thin: support SSL connection to Ignite node

2018-02-02 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350609#comment-16350609
 ] 

Sergey Kalashnikov commented on IGNITE-6625:


[~tledkov-gridgain], here are my comments:
1. Please remove unused imports from {{ConnectionProperties.java}} and 
{{JdbcThinConnectionSelfTest.java}}
2. It might be helpful to move all SSL-specific stuff from 
{{JdbcThinTcpIo.java}} to a sub-class.
3. Please consider adding tests that would check errors for the following cases:
invalid/unsupported SSL protocol, key store type, key algorithm. The 
corresponding connection parameters seem not to be covered by tests.

> JDBC thin: support SSL connection to Ignite node
> 
>
> Key: IGNITE-6625
> URL: https://issues.apache.org/jira/browse/IGNITE-6625
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 2.2
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.5
>
>
> SSL connection must be supported for JDBC thin driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7192) JDBC: support FQDN to multiple IPs during connection establishment

2018-02-01 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348297#comment-16348297
 ] 

Sergey Kalashnikov commented on IGNITE-7192:


[~guseinov], Please correct the coding style across code and tests (see 
https://cwiki.apache.org/confluence/display/IGNITE/Coding+Guidelines)
I noticed multiple style violations:
- Brace style in try/catch blocks.
- Brace usage around multi-line and single-line operators.
- @Override must be on the same line as the function name.
- No empty line between the function description and params in javadoc.
- Use of empty lines between statements.

> JDBC: support FQDN to multiple IPs during connection establishment
> --
>
> Key: IGNITE-7192
> URL: https://issues.apache.org/jira/browse/IGNITE-7192
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Roman Guseinov
>Priority: Major
>  Labels: pull-request-available
>
> The thin JDBC driver may have an FQDN (host name) in its connection string.
> Currently, it resolves the FQDN to a single IP and tries to connect to that 
> IP only.
> It would be better to try the IPs one by one until a connection succeeds 
> when DNS returns multiple A-records (an FQDN can resolve to several IPs). 
> This would give JDBC thin driver users a simple fallback option.
> Similar functionality is already implemented in the ODBC driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which comes in two 
flavors: one returns an error code (an int), the other a character string.
 The current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic output and logs.

The man page for {{strerror_r()}} provides macros to distinguish the two 
versions, but relying on them is not very portable.

I suggest that we create a test in {{configure.ac}} that defines a specific 
macro telling which {{strerror_r()}} variant we have.

 
{{
AC_CACHE_CHECK(
  [for support of strerror_r that returns int],
  [odbc_have_int_strerror_r],
  [AC_RUN_IFELSE(
    [AC_LANG_SOURCE[
      #include <string.h>
      #include <errno.h>

      int main(int argc, char** argv) {
        char buf[256] = {0};

        int ret = strerror_r(ENOMEM, buf, sizeof(buf));

        return ret;
      }
    ]],
    [odbc_have_int_strerror_r=yes],
    [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
fi
}}

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256];

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> {{
> AC_CACHE_CHECK(
>   [for support of strerror_r that returns int],
>   [odbc_have_int_strerror_r],
>   [AC_RUN_IFELSE(
> [AC_LANG_SOURCE[
>   #include 
>   #include 
>   int main(int argc, char** argv) {
> char buf[256] = {0};
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>   }
> ]],
> [odbc_have_int_strerror_r=yes],
> [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>   AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])
> }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256];

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256] = \\{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> {quote}
> AC_CACHE_CHECK(
>  [for support of strerror_r that returns int],
>  [odbc_have_int_strerror_r],
>  [AC_RUN_IFELSE(
>  [AC_LANG_SOURCE[
>  #include 
>  #include 
> int main(int argc, char** argv)
> {
> char buf[256];
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>  }
>  ]],
>  [odbc_have_int_strerror_r=yes],
>  [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256] = \\{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256] = {0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> {quote}
> AC_CACHE_CHECK(
>  [for support of strerror_r that returns int],
>  [odbc_have_int_strerror_r],
>  [AC_RUN_IFELSE(
>  [AC_LANG_SOURCE[
>  #include 
>  #include 
> int main(int argc, char** argv)
> {
> char buf[256] = \\{0};
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>  }
>  ]],
>  [odbc_have_int_strerror_r=yes],
>  [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{quote}
AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)
{
char buf[256] = {0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])
{quote}

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)

{

char buf[256] = \\{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> {quote}
> AC_CACHE_CHECK(
>  [for support of strerror_r that returns int],
>  [odbc_have_int_strerror_r],
>  [AC_RUN_IFELSE(
>  [AC_LANG_SOURCE[
>  #include 
>  #include 
> int main(int argc, char** argv)
> {
> char buf[256] = {0};
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>  }
>  ]],
>  [odbc_have_int_strerror_r=yes],
>  [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv) {
 char buf[256] = \{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
One returns error code and another character string.
Current code seems to expect the XSI version, which returns an int.
But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

{{AC_CACHE_CHECK(}}
{{ [for support of strerror_r that returns int],}}
{{ [odbc_have_int_strerror_r],}}
{{ [AC_RUN_IFELSE(}}
{{ [AC_LANG_SOURCE[}}
{{ #include }}
{{ #include }}{{int main(int argc, char** argv) {}}
{{ char buf[256] = \{0};}}{{int ret = strerror_r(ENOMEM, buf, 
sizeof(buf));}}{{return ret;}}
{{ }}}
{{ ]],}}
{{ [odbc_have_int_strerror_r=yes],}}
{{ [odbc_have_int_strerror_r=no])])}}{{if test "$odbc_have_int_strerror_r" = 
"yes"; then}}
{{ AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])}}


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> AC_CACHE_CHECK(
>  [for support of strerror_r that returns int],
>  [odbc_have_int_strerror_r],
>  [AC_RUN_IFELSE(
>  [AC_LANG_SOURCE[
>  #include 
>  #include 
> int main(int argc, char** argv) {
>  char buf[256] = \{0};
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>  }
>  ]],
>  [odbc_have_int_strerror_r=yes],
>  [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-7515:
---
Description: 
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv)

{

char buf[256] = \\{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])

  was:
Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 2 
flavors. 
 One returns error code and another character string.
 Current code seems to expect the XSI version, which returns an int.
 But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} to define specific macro to 
tell what {{strerror_r()}} variant we have.

 

AC_CACHE_CHECK(
 [for support of strerror_r that returns int],
 [odbc_have_int_strerror_r],
 [AC_RUN_IFELSE(
 [AC_LANG_SOURCE[
 #include 
 #include 

int main(int argc, char** argv) {
 char buf[256] = \{0};

int ret = strerror_r(ENOMEM, buf, sizeof(buf));

return ret;
 }
 ]],
 [odbc_have_int_strerror_r=yes],
 [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
 AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
strerror_r])


> ODBC: Socket error messages may be missing on linux
> ---
>
> Key: IGNITE-7515
> URL: https://issues.apache.org/jira/browse/IGNITE-7515
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Sergey Kalashnikov
>Priority: Minor
>
> Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which can come in 
> 2 flavors. 
>  One returns error code and another character string.
>  Current code seems to expect the XSI version, which returns an int.
>  But if we happen to compile against the other version, the error messages 
> will be missing from the diagnostic messages and logs.
> The man page for {{strerror_r()}} provides the macros to distinguish the two 
> versions, but that is not very portable.
> I suggest that we create a test in {{configure.ac}} to define specific macro 
> to tell what {{strerror_r()}} variant we have.
>  
> AC_CACHE_CHECK(
>  [for support of strerror_r that returns int],
>  [odbc_have_int_strerror_r],
>  [AC_RUN_IFELSE(
>  [AC_LANG_SOURCE[
>  #include 
>  #include 
> int main(int argc, char** argv)
> {
> char buf[256] = \\{0};
> int ret = strerror_r(ENOMEM, buf, sizeof(buf));
> return ret;
>  }
>  ]],
>  [odbc_have_int_strerror_r=yes],
>  [odbc_have_int_strerror_r=no])])
> if test "$odbc_have_int_strerror_r" = "yes"; then
>  AC_DEFINE([HAVE_INT_STRERROR_R], [1], [1 in case the runtime provides int 
> strerror_r])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7515) ODBC: Socket error messages may be missing on linux

2018-01-24 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-7515:
--

 Summary: ODBC: Socket error messages may be missing on linux
 Key: IGNITE-7515
 URL: https://issues.apache.org/jira/browse/IGNITE-7515
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.3
Reporter: Sergey Kalashnikov


Function {{GetSocketErrorMessage()}} uses {{strerror_r()}}, which comes in two 
flavors: one returns an error code, the other a character string.
Current code seems to expect the XSI version, which returns an int.
But if we happen to compile against the other version, the error messages will 
be missing from the diagnostic messages and logs.

The man page for {{strerror_r()}} provides the macros to distinguish the two 
versions, but that is not very portable.

I suggest that we create a test in {{configure.ac}} that defines a specific macro 
telling which {{strerror_r()}} variant we have.

 

{code}
AC_CACHE_CHECK(
  [for support of strerror_r that returns int],
  [odbc_have_int_strerror_r],
  [AC_RUN_IFELSE(
    [AC_LANG_SOURCE([
      #include <string.h>
      #include <errno.h>
      int main(int argc, char** argv)
      {
          char buf[256] = {0};
          int ret = strerror_r(ENOMEM, buf, sizeof(buf));
          return ret;
      }
    ])],
    [odbc_have_int_strerror_r=yes],
    [odbc_have_int_strerror_r=no])])

if test "$odbc_have_int_strerror_r" = "yes"; then
  AC_DEFINE([HAVE_INT_STRERROR_R], [1],
    [1 in case the runtime provides int strerror_r])
fi
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7512) Variable updated should be checked for null before invocation of ctx.validateKeyAndValue(entry.key(), updated) in GridDhtAtomicCache.updateWithBatch

2018-01-24 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-7512:
--

Assignee: Sergey Kalashnikov

> Variable updated should be checked for null before invocation of 
> ctx.validateKeyAndValue(entry.key(), updated) in 
> GridDhtAtomicCache.updateWithBatch
> 
>
> Key: IGNITE-7512
> URL: https://issues.apache.org/jira/browse/IGNITE-7512
> Project: Ignite
>  Issue Type: Bug
>Reporter: Evgenii Zhuravlev
>Assignee: Sergey Kalashnikov
>Priority: Major
>
> Or it could lead to the NPE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6810) ODBC: Add secure connection support

2018-01-22 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334191#comment-16334191
 ] 

Sergey Kalashnikov commented on IGNITE-6810:


[~isapego], I did the review. The changes look good to me.

BTW, can we add a test where the server rejects the client certificate?

> ODBC: Add secure connection support
> ---
>
> Key: IGNITE-6810
> URL: https://issues.apache.org/jira/browse/IGNITE-6810
> Project: Ignite
>  Issue Type: New Feature
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: odbc
> Fix For: 2.5
>
> Attachments: new-ui.png
>
>
> Need to add support of SSL/TLS for ODBC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7359) SQL: DDL synchronization with query and cache API operations

2018-01-16 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327084#comment-16327084
 ] 

Sergey Kalashnikov commented on IGNITE-7359:


Implemented a very basic prototype that needs a preliminary review.

[~vozerov], could you please take a look?

> SQL: DDL synchronization with query and cache API operations
> 
>
> Key: IGNITE-7359
> URL: https://issues.apache.org/jira/browse/IGNITE-7359
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
>Priority: Major
> Fix For: 2.5
>
>
> We need to add a means to synchronize DDL operations with queries and cache 
> operations. This is required to facilitate future DDL improvements that would 
> require modifying user data and/or some cache metadata atomically. Basically 
> it is a sort of global table lock.
> One way to achieve this is to re-use the mechanism used by the exchange 
> procedure. 
> An exchange waits for all already-started cache operations to complete before 
> proceeding itself.
> Likewise, new cache operations won't start until the exchange procedure has 
> completed.
> However, for DDL we only need to selectively defer operations that are made 
> on the same cache as the DDL operation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7359) SQL: DDL synchronization with query and cache API operations

2018-01-09 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-7359:
--

 Summary: SQL: DDL synchronization with query and cache API 
operations
 Key: IGNITE-7359
 URL: https://issues.apache.org/jira/browse/IGNITE-7359
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov
 Fix For: 2.5


We need to add a means to synchronize DDL operations with queries and cache 
operations. This is required to facilitate future DDL improvements that would 
require modifying user data and/or some cache metadata atomically. Basically, it 
is a sort of global table lock.

One way to achieve this is to re-use the mechanism used by the exchange 
procedure. 
An exchange waits for all already-started cache operations to complete before 
proceeding itself.
Likewise, new cache operations won't start until the exchange procedure has 
completed.

However, for DDL we only need to selectively defer operations that are made on 
the same cache as the DDL operation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-7333) SQL: Documentation for ALTER TABLE DROP COLUMN statement

2017-12-29 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-7333:
--

Assignee: Denis Magda  (was: Sergey Kalashnikov)

[~dmagda]
Denis, please review the updated page for ALTER TABLE.
https://apacheignite-sql.readme.io/v2.3/docs/alter-table_24_hidden


> SQL: Documentation for ALTER TABLE DROP COLUMN statement
> 
>
> Key: IGNITE-7333
> URL: https://issues.apache.org/jira/browse/IGNITE-7333
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.4
>Reporter: Sergey Kalashnikov
>Assignee: Denis Magda
> Fix For: 2.4
>
>
> Add a documentation for ALTER TABLE DROP COLUMN statement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7333) SQL: Documentation for ALTER TABLE DROP COLUMN statement

2017-12-28 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-7333:
--

 Summary: SQL: Documentation for ALTER TABLE DROP COLUMN statement
 Key: IGNITE-7333
 URL: https://issues.apache.org/jira/browse/IGNITE-7333
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.4
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov
 Fix For: 2.4


Add a documentation for ALTER TABLE DROP COLUMN statement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5949) DDL: Support ALTER TABLE DROP COLUMN

2017-12-28 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305399#comment-16305399
 ] 

Sergey Kalashnikov edited comment on IGNITE-5949 at 12/28/17 12:27 PM:
---

Fixed.
1) I have added the checks to the {{prepareChangeOnNotStartedCache()}} and 
{{prepareChangeOnStartedCache()}} where other similar checks are made.
2) There are tests that check for proper error messages.
JDBC column metadata is also checked in the new tests with the help of 
{{checkTableState()}} and {{getColumnMeta()}} routines.
TC test results seem to be OK.

[~vozerov], Please take a look.


was (Author: skalashnikov):
Fixed.
1) I have added the checks to the {{prepareChangeOnNotStartedCache()}} and 
{{prepareChangeOnStartedCache()}} where other similar checks are made.
2) There are tests that check for proper error messages.
JDBC column metadata is also checked in the new tests with the help of 
{{checkTableState()}} and {{getColumnMeta()}} routines.
TC test results seem to be OK.

> DDL: Support ALTER TABLE DROP COLUMN
> 
>
> Key: IGNITE-5949
> URL: https://issues.apache.org/jira/browse/IGNITE-5949
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql
>Reporter: Andrew Mashenkov
>Assignee: Sergey Kalashnikov
>  Labels: important
> Fix For: 2.4
>
>
> Ignite should support {{DROP COLUMN}} operation for {{ALTER TABLE}} command.
> Design considerations:
> 1) Drop should only be possible on binary types without schema (see 
> IGNITE-6611). Probably we will need a new option for {{CREATE TABLE}} command
> 2) Drop should not block other operations for a long time. We should 
> synchronously block the table, change meta, then release the lock and let 
> operations continue.
> 3) Actual data removal should be performed asynchronously, in the same way we 
> create indexes. During this time we should not allow any other modifications to 
> the table.
> 4) Be careful with node stop - we do not want to wait for years for this 
> command to complete.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5623) DDL needs to support DEFAULT operator

2017-12-25 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303299#comment-16303299
 ] 

Sergey Kalashnikov edited comment on IGNITE-5623 at 12/25/17 3:07 PM:
--

[~tledkov-gridgain], I have reviewed the changes. Overall it looks good, but I 
have a few comments:
1) The implementation for JDBC metadata, namely {{getColumns()}}, is missing. The 
corresponding column for the default value there is "COLUMN_DEF".
2) What will happen if the type of the provided default value is not convertible 
to the type of the column? Will an exception occur, and when?
3) {{GridSqlQueryParser.parseAddColumn()}} - Perhaps we should add the column name 
to the exception.
4) {{BinaryFieldImpl.value()}}
You might want to move the check for zero schemaId into {{fieldOrder()}}, where it 
is already used, and return {{BinarySchema.ORDER_NOT_FOUND}}.
5) {{QueryEntity}}
{{equals()}} and {{hashCode()}} need to be updated.






was (Author: skalashnikov):
[~tledkov-gridgain], I have reviewed the changes. Overall it looks good, but I 
have few comments:
1) Missing the implementation for Jdbc metadata, namely getColumns(). The 
corresponding column for default value there is "COLUMN_DEF".
2) What will happen if the type of provided default value is not convertible to 
the type of the column? Will the exception occur and when?
3) {GridSqlQueryParser.parseAddColumn()} - Perhaps we should add column name to 
the exception.
4) {BinaryFieldImpl.value()}
You might want to move the check for zero schemaId into fieldOrder() where it 
is used already and return BinarySchema.ORDER_NOT_FOUND.
5) {QueryEntity}
{equals()} and {hashCode()} needs to be updated.





> DDL needs to support DEFAULT operator 
> --
>
> Key: IGNITE-5623
> URL: https://issues.apache.org/jira/browse/IGNITE-5623
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.0
>Reporter: Denis Magda
>Assignee: Taras Ledkov
>  Labels: important
> Fix For: 2.4
>
>
> There should be a way to set a default value for a column/field if the one is 
> not specified during an insert operation. In general, we need to support 
> {{ DEFAULT }} in a way it's show below:
> {code}
> CREATE TABLE Persons (
>   ID int,
>   FirstName varchar(255),
>   Age int,
>   City varchar(255) DEFAULT 'Sandnes'
> );
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5623) DDL needs to support DEFAULT operator

2017-12-25 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303299#comment-16303299
 ] 

Sergey Kalashnikov commented on IGNITE-5623:


[~tledkov-gridgain], I have reviewed the changes. Overall it looks good, but I 
have few comments:
1) Missing the implementation for Jdbc metadata, namely getColumns(). The 
corresponding column for default value there is "COLUMN_DEF".
2) What will happen if the type of provided default value is not convertible to 
the type of the column? Will the exception occur and when?
3) {GridSqlQueryParser.parseAddColumn()} - Perhaps we should add column name to 
the exception.
4) {BinaryFieldImpl.value()}
You might want to move the check for zero schemaId into fieldOrder() where it 
is used already and return BinarySchema.ORDER_NOT_FOUND.
5) {QueryEntity}
{equals()} and {hashCode()} needs to be updated.





> DDL needs to support DEFAULT operator 
> --
>
> Key: IGNITE-5623
> URL: https://issues.apache.org/jira/browse/IGNITE-5623
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.0
>Reporter: Denis Magda
>Assignee: Taras Ledkov
>  Labels: important
> Fix For: 2.4
>
>
> There should be a way to set a default value for a column/field if the one is 
> not specified during an insert operation. In general, we need to support 
> {{ DEFAULT }} in a way it's show below:
> {code}
> CREATE TABLE Persons (
>   ID int,
>   FirstName varchar(255),
>   Age int,
>   City varchar(255) DEFAULT 'Sandnes'
> );
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-7143) CPP: Can not insert zero decimal value with the ODBC driver.

2017-12-19 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296989#comment-16296989
 ] 

Sergey Kalashnikov commented on IGNITE-7143:


[~isapego]
Igor, the changes looks good to me.

> CPP: Can not insert zero decimal value with the ODBC driver.
> 
>
> Key: IGNITE-7143
> URL: https://issues.apache.org/jira/browse/IGNITE-7143
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Blocker
> Fix For: 2.4
>
>
> Create the following table:
> {code}
> CREATE TABLE IF NOT EXISTS TestTable (RecId varchar PRIMARY KEY, RecValue 
> DECIMAL(4,2))
> WITH "template=replicated, cache_name=TestTable_Cache";
> {code}
> Then do an ODBC insert using the OdbcParameter with the OdbcCommand object:
> {code}
> INSERT INTO TestTable (RecId, RecValue) VALUES ('1', ?)
> {code}
> The Odbc error is "The connection has been disabled." however the JVM is
> throwing this error:
> {noformat}
> [SEVERE][client-connector-#47][ClientListenerNioListener] Failed to parse
> client request.
> java.lang.ArrayIndexOutOfBoundsException: 0
>  at org.apache.ignite.internal.binary.BinaryUtils.doReadDecimal
> {noformat}
> Everything works out ok until the actual value set on the parameter is 0.
> Null works fine, values other than 0 work fine. Precision and
> Scale are set appropriately. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-7143) CPP: Can not insert zero decimal value with the ODBC driver.

2017-12-19 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296989#comment-16296989
 ] 

Sergey Kalashnikov edited comment on IGNITE-7143 at 12/19/17 3:46 PM:
--

[~isapego]
Igor, the changes look good to me.


was (Author: skalashnikov):
[~isapego]
Igor, the changes looks good to me.

> CPP: Can not insert zero decimal value with the ODBC driver.
> 
>
> Key: IGNITE-7143
> URL: https://issues.apache.org/jira/browse/IGNITE-7143
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Blocker
> Fix For: 2.4
>
>
> Create the following table:
> {code}
> CREATE TABLE IF NOT EXISTS TestTable (RecId varchar PRIMARY KEY, RecValue 
> DECIMAL(4,2))
> WITH "template=replicated, cache_name=TestTable_Cache";
> {code}
> Then do an ODBC insert using the OdbcParameter with the OdbcCommand object:
> {code}
> INSERT INTO TestTable (RecId, RecValue) VALUES ('1', ?)
> {code}
> The Odbc error is "The connection has been disabled." however the JVM is
> throwing this error:
> {noformat}
> [SEVERE][client-connector-#47][ClientListenerNioListener] Failed to parse
> client request.
> java.lang.ArrayIndexOutOfBoundsException: 0
>  at org.apache.ignite.internal.binary.BinaryUtils.doReadDecimal
> {noformat}
> Everything works out ok until the actual value set on the parameter is 0.
> Null works fine, values other than 0 work fine. Precision and
> Scale are set appropriately. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-7114) CPP: C++ node can't start without java examples folder

2017-12-11 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285987#comment-16285987
 ] 

Sergey Kalashnikov commented on IGNITE-7114:


[~isapego], I have reviewed the code and I'm OK with the changes.

> CPP: C++ node can't start without java examples folder
> --
>
> Key: IGNITE-7114
> URL: https://issues.apache.org/jira/browse/IGNITE-7114
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.1
>Reporter: Evgenii Zhuravlev
>Assignee: Igor Sapego
>Priority: Critical
> Fix For: 2.4
>
> Attachments: sample.png
>
>
> Error message: 
> ERROR: Java classpath is empty (did you set {{IGNITE_HOME}} environment 
> variable?)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6904) SQL: partition reservations are released too early in lazy mode

2017-12-06 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279885#comment-16279885
 ] 

Sergey Kalashnikov commented on IGNITE-6904:


[~rkondakov] Roma, I have reviewed the code and it looks good. 
Perhaps you could add a test to confirm that partition reservations are released 
in case the cursor is closed and the last page is never requested. 
As far as I can tell from the code, it is OK, but having a test would be 
great. 
[~vozerov], what do you think?

> SQL: partition reservations are released too early in lazy mode
> ---
>
> Key: IGNITE-6904
> URL: https://issues.apache.org/jira/browse/IGNITE-6904
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
> Fix For: 2.4
>
>
> In lazy mode we advance query execution as new page requests arrive. However, 
> method {{GridMapQueryExecutor#onQueryRequest0}} releases partition 
> reservations when only the very first page is processed:
> {code}
> finally {
> GridH2QueryContext.clearThreadLocal();
> if (distributedJoinMode == OFF)
> qctx.clearContext(false);
> }
> {code}
> It means that incorrect results may be returned on unstable topology. We need 
> to release partitions only after the whole query is executed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6876) ODBC: Add support for SQL_ATTR_CONNECTION_TIMEOUT

2017-11-20 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16259466#comment-16259466
 ] 

Sergey Kalashnikov commented on IGNITE-6876:


@Igor Sapego, I only have 2 minor comments to add:
1. SocketClient::TrySetOptions(), the case when setting non-blocking mode has 
failed.
Can you extend the message you put into the diagnostic record with a remark that 
the connection timeout functionality will not work because of that?

2. File attributes_test.cpp. 
Misprint in the name of the test 
BOOST_AUTO_TEST_CASE(ConnetionAttributeConnectionTimeout)

Otherwise, looks good to me. Thanks.


> ODBC: Add support for SQL_ATTR_CONNECTION_TIMEOUT
> -
>
> Key: IGNITE-6876
> URL: https://issues.apache.org/jira/browse/IGNITE-6876
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Igor Sapego
> Fix For: 2.4
>
>
> Remote ODBC client should be able to request a timeout for socket 
> send/receive operations. It should be done with 
> {{SQL_ATTR_CONNECTION_TIMEOUT}} attribute.
> If an application with ODBC driver experiences a timeout for some query, it 
> can continue to work after closing the connection and establishing a new one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5949) DDL: Support ALTER TABLE DROP COLUMN

2017-11-15 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-5949:
--

Assignee: Sergey Kalashnikov  (was: Alexander Paschenko)

> DDL: Support ALTER TABLE DROP COLUMN
> 
>
> Key: IGNITE-5949
> URL: https://issues.apache.org/jira/browse/IGNITE-5949
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql
>Reporter: Andrew Mashenkov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
>
> Ignite should support {{DROP COLUMN}} operation for {{ALTER TABLE}} command.
> Design considerations:
> 1) Drop should only be possible on binary types without schema (see 
> IGNITE-6611). Probably we will need a new option for {{CREATE TABLE}} command
> 2) Drop should not block other operations for a long time. We should 
> synchronously block the table, change meta, then release the lock and let 
> operations continue.
> 3) Actual data removal should be performed asynchronously, in the same way we 
> create indexes. During this time we should not allow any other modifications to 
> the table.
> 4) Be careful with node stop - we do not want to wait for years for this 
> command to complete.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6611) Optionally disable binary metadata for type

2017-11-15 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253200#comment-16253200
 ] 

Sergey Kalashnikov commented on IGNITE-6611:


In order to facilitate the implementation of the SQL "ALTER TABLE DROP COLUMN" 
functionality, the following changes to the binary metadata are proposed.

1) Add a special metadata type and an API to explicitly add/remove fields, rather 
than implicitly pulling in every new field occurring in the data.
This API will be utilized by the DDL.

2) Changes in this new metadata type shall be tracked on a per-cache basis. Each 
change is to be assigned a version.

3) Serialization/deserialization shall transform the old data so that it complies 
with the current metadata version.

4) The metadata version can be attributed to the binary object with the help of 
the schema id (i.e., the version is to be stored in the binary schema).


> Optionally disable binary metadata for type
> ---
>
> Key: IGNITE-6611
> URL: https://issues.apache.org/jira/browse/IGNITE-6611
> Project: Ignite
>  Issue Type: Task
>  Components: binary, sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
>
> We need to introduce special metadata mode for type - without metadata. This 
> way we will have a kind of "flexible" type with no restrictions. This will be 
> especially useful for SQL-related types where schema changes are possible 
> (e.g. ADD COLUMN -> DROP COLUMN).
> Public part should be exposed to:
> 1) {{BinaryTypeConfiguration}}
> 2) {{BinaryType}} - add a flag here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6836) ODBC: Add support for SQL_ATTR_QUERY_TIMEOUT

2017-11-14 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251391#comment-16251391
 ] 

Sergey Kalashnikov commented on IGNITE-6836:


[~isapego], Thanks. I am ok with the changes.

> ODBC: Add support for SQL_ATTR_QUERY_TIMEOUT
> 
>
> Key: IGNITE-6836
> URL: https://issues.apache.org/jira/browse/IGNITE-6836
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Igor Sapego
> Fix For: 2.4
>
>
> It would be great if we can support {{SQL_ATTR_QUERY_TIMEOUT}} at ODBC.
> That gives a flexibility to end-user code to handle long-running/timeouted 
> queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6850) SQL: integrate index inline size to CREATE INDEX syntax

2017-11-14 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251331#comment-16251331
 ] 

Sergey Kalashnikov commented on IGNITE-6850:


[~kirill.shirokov]
I think that it is safe to use {{IgniteQueryErrorCode.Parsing}} code instead of 
{{UNKNOWN}} in {{tryQueryDistributedSqlFieldsNative()}}. Otherwise, looks good 
to me.

> SQL: integrate index inline size to CREATE INDEX syntax
> ---
>
> Key: IGNITE-6850
> URL: https://issues.apache.org/jira/browse/IGNITE-6850
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Kirill Shirokov
> Fix For: 2.4
>
>
> Index value inlining is an important optimization used to minimize the number of 
> data page reads when doing an index lookup (see {{InlineIndexHelper}}). 
> Currently the only way to set it is the {{QueryIndex.inlineSize}} property, so it 
> cannot be set from an SQL command. We need to integrate it into our SQL syntax 
> (see {{SqlCreateIndexCommand}}) and make sure it is propagated properly.
> Sample syntax:
> {code}
> CREATE INDEX idx ON tbl(field) INLINE_SIZE 20;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6835) ODBC driver should handle ungraceful tcp disconnects

2017-11-14 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251122#comment-16251122
 ] 

Sergey Kalashnikov commented on IGNITE-6835:


[~isapego]
Please correct the misprint "Netwirking initialisation ". Otherwise, looks good.

> ODBC driver should handle ungraceful tcp disconnects
> 
>
> Key: IGNITE-6835
> URL: https://issues.apache.org/jira/browse/IGNITE-6835
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Igor Sapego
>  Labels: odbc
> Fix For: 2.4
>
>
> It has been found that an ungraceful TCP disconnect makes the ODBC driver get 
> stuck in socket recv().
> An ungraceful TCP disconnect could be caused by:
> 1. Network failure (or new firewall rules)
> 2. Remote party shutdown (half-closed connection)
> So, the proposal is to set up the following socket options:
> 1) SO_KEEPALIVE enabled
> 2) TCP_KEEPIDLE to 60 sec. It is 2 hours by default.
> 3) TCP_KEEPINTVL to 5 (?) sec. It is 1 sec on Windows and 75 sec on Linux by 
> default.
> 4) send/receive buffers set to some greater value (8 KB by default)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6850) SQL: integrate index inline size to CREATE INDEX syntax

2017-11-13 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249447#comment-16249447
 ] 

Sergey Kalashnikov commented on IGNITE-6850:


[~kirill.shirokov], I have reviewed the changes. Overall it looks good, I only 
have few minor comments:

1. {{IgniteH2Indexing.tryQueryDistributedSqlFieldsNative()}}
Pretty similar error handling for both Exception and SqlParseException. Please 
consider reducing it to a function call.

2. {{AbstractSchemaSelfTest}}
There are 3 {{assertIndex()}} javadocs that have superfluous whitespace before 
the first @param. 
There should be a blank line between the function description and the first @param.

3. {{DynamicIndexAbstractBasicSelfTest.checkNoIndexIsCreatedForInlineSize()}}
The {{igniteQryErrorCode}} param is missing in the javadoc.


> SQL: integrate index inline size to CREATE INDEX syntax
> ---
>
> Key: IGNITE-6850
> URL: https://issues.apache.org/jira/browse/IGNITE-6850
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Kirill Shirokov
> Fix For: 2.4
>
>
> Index value inlining is an important optimization used to minimize the number of 
> data page reads when doing an index lookup (see {{InlineIndexHelper}}). 
> Currently the only way to set it is the {{QueryIndex.inlineSize}} property, so it 
> cannot be set from an SQL command. We need to integrate it into our SQL syntax 
> (see {{SqlCreateIndexCommand}}) and make sure it is propagated properly.
> Sample syntax:
> {code}
> CREATE INDEX idx ON tbl(field) INLINE_SIZE 20;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6752) JDBC thin: connection property refactoring

2017-11-10 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247761#comment-16247761
 ] 

Sergey Kalashnikov commented on IGNITE-6752:


[~tledkov-gridgain], Looks good to me.

> JDBC thin: connection property refactoring
> --
>
> Key: IGNITE-6752
> URL: https://issues.apache.org/jira/browse/IGNITE-6752
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: jdbc
>Affects Versions: 2.2
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
> Fix For: 2.4
>
>
> The issues IGNITE-6140 and IGNITE-6625 call for connection property 
> refactoring.
> Otherwise, the logic for working with connection properties is scattered across 
> several classes.
> Also, the SSL implementation for the JDBC client adds many new properties.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6835) ODBC driver should handle ungraceful tcp disconnects

2017-11-10 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247701#comment-16247701
 ] 

Sergey Kalashnikov commented on IGNITE-6835:


[~isapego]
Here are my comments:

1. Duplicate {{LOG_MSG()}} in the Linux variant of {{SocketClient::Connect()}}

2. {{SocketClient::TrySetOptions()}}
It would be great to have additional {{LOG_MSG}} output with 
{{errno}}/{{WSAGetLastError()}} in case of {{setsockopt}} failures.

3. Windows {{SocketClient::TrySetOption()}}

{{struct tcp_keepalive settings;}}
I would initialize it with zero (struct tcp_keepalive settings = {0};) before 
filling individual fields.



> ODBC driver should handle ungraceful tcp disconnects
> 
>
> Key: IGNITE-6835
> URL: https://issues.apache.org/jira/browse/IGNITE-6835
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: odbc
>Affects Versions: 2.1
>Reporter: Alexey Popov
>Assignee: Igor Sapego
>  Labels: odbc
> Fix For: 2.4
>
>
> It has been found that an ungraceful TCP disconnect makes the ODBC driver get 
> stuck in socket recv().
> An ungraceful TCP disconnect could be caused by:
> 1. Network failure (or new firewall rules)
> 2. Remote party shutdown (half-closed connection)
> So, the proposal is to set up the following socket options:
> 1) SO_KEEPALIVE enabled
> 2) TCP_KEEPIDLE to 60 sec. It is 2 hours by default.
> 3) TCP_KEEPINTVL to 5 (?) sec. It is 1 sec on Windows and 75 sec on Linux by 
> default.
> 4) send/receive buffers set to some greater value (8 KB by default)





[jira] [Commented] (IGNITE-6841) ODBC: Add new version for multiple result set functionality

2017-11-09 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245699#comment-16245699
 ] 

Sergey Kalashnikov commented on IGNITE-6841:


[~isapego], Looks good to me.

> ODBC: Add new version for multiple result set functionality
> ---
>
> Key: IGNITE-6841
> URL: https://issues.apache.org/jira/browse/IGNITE-6841
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: odbc
>Affects Versions: 2.3
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>  Labels: odbc
> Fix For: 2.4
>
>
> Changes made in IGNITE-6357 changed ODBC protocol, but protocol version was 
> not increased. Need to fix it.





[jira] [Resolved] (IGNITE-6276) SQL: Investigate parser generators

2017-11-08 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov resolved IGNITE-6276.

Resolution: Won't Fix

> SQL: Investigate parser generators
> --
>
> Key: IGNITE-6276
> URL: https://issues.apache.org/jira/browse/IGNITE-6276
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
> Attachments: IGNITE-6276.patch, antlr4-ignite.zip
>
>
> Now Ignite relies on H2 for SQL processing. It has been discussed many times 
> on dev list that we must start introducing our own SQL core in small 
> incremental steps. 
> Let's start with analyzing the options for implementing the parser part.
> We may begin with http://www.antlr.org/ and create a simple separate project 
> that would generate the parser for some simple DDL commands like DROP INDEX.
> This will give us a hint on the complexity and limitations of the approach.
> 1) Set up Maven/ANTLR.
> 2) Prepare lexer/parser.
> 3) Generate.
> 4) Write a test.
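For illustration, a minimal ANTLR4 grammar covering just the DROP INDEX case might look like the sketch below; rule and token names here are assumptions made for the example, not the actual prototype.

```antlr
grammar IgniteDdl;

// Entry rule: a single DROP INDEX statement.
dropIndex : DROP INDEX ifExists? qualifiedName EOF ;
ifExists  : IF EXISTS ;
qualifiedName : IDENTIFIER ('.' IDENTIFIER)? ;

// Case-insensitive keywords, as SQL requires.
DROP   : [Dd][Rr][Oo][Pp] ;
INDEX  : [Ii][Nn][Dd][Ee][Xx] ;
IF     : [Ii][Ff] ;
EXISTS : [Ee][Xx][Ii][Ss][Tt][Ss] ;

IDENTIFIER : [A-Za-z_] [A-Za-z_0-9]* ;
WS : [ \t\r\n]+ -> skip ;
```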





[jira] [Comment Edited] (IGNITE-6276) SQL: Investigate parser generators

2017-11-08 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244102#comment-16244102
 ] 

Sergey Kalashnikov edited comment on IGNITE-6276 at 11/8/17 3:15 PM:
-

We have identified several shortcomings with the generated-parser approach.
- Error messages produced by the parser are not customizable enough for a user 
to easily understand them. I doubt a user will understand messages like 'no 
viable alternative'.
- The ANTLR lexer cannot be controlled by the parser, which makes many useful 
things impossible.
- The performance assessment results aren't great in terms of scalability.

I have attached a patch for historical purposes.


was (Author: skalashnikov):
We have identified several shortcomings with the generated-parser approach.
- Error messages produced by the parser are not customizable enough for a user 
to easily understand them. I doubt a user will understand messages like 'no 
viable alternative'.
- The ANTLR lexer cannot be controlled by the parser, which makes many useful 
things impossible.
- The performance assessment results aren't great in terms of scalability.

> SQL: Investigate parser generators
> --
>
> Key: IGNITE-6276
> URL: https://issues.apache.org/jira/browse/IGNITE-6276
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
> Attachments: IGNITE-6276.patch, antlr4-ignite.zip
>
>
> Now Ignite relies on H2 for SQL processing. It has been discussed many times 
> on dev list that we must start introducing our own SQL core in small 
> incremental steps. 
> Let's start with analyzing the options for implementing the parser part.
> We may begin with http://www.antlr.org/ and create a simple separate project 
> that would generate the parser for some simple DDL commands like DROP INDEX.
> This will give us a hint on the complexity and limitations of the approach.
> 1) Set up Maven/ANTLR.
> 2) Prepare lexer/parser.
> 3) Generate.
> 4) Write a test.





[jira] [Resolved] (IGNITE-6320) SQL: ANTLR performance assessment

2017-11-08 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov resolved IGNITE-6320.

Resolution: Won't Fix

> SQL: ANTLR performance assessment
> -
>
> Key: IGNITE-6320
> URL: https://issues.apache.org/jira/browse/IGNITE-6320
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
>
> Proposed process:
> 1) Download MySQL grammar [1]
> 2) Generate parser
> 3) Measure parsing performance for both simple and complex SQL queries with 
> JMH.
> 4) Analyze the numbers
> [1] https://github.com/antlr/grammars-v4/tree/master/mysql





[jira] [Closed] (IGNITE-6276) SQL: Investigate parser generators

2017-11-08 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov closed IGNITE-6276.
--

> SQL: Investigate parser generators
> --
>
> Key: IGNITE-6276
> URL: https://issues.apache.org/jira/browse/IGNITE-6276
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
> Attachments: IGNITE-6276.patch, antlr4-ignite.zip
>
>
> Now Ignite relies on H2 for SQL processing. It has been discussed many times 
> on dev list that we must start introducing our own SQL core in small 
> incremental steps. 
> Let's start with analyzing the options for implementing the parser part.
> We may begin with http://www.antlr.org/ and create a simple separate project 
> that would generate the parser for some simple DDL commands like DROP INDEX.
> This will give us a hint on the complexity and limitations of the approach.
> 1) Set up Maven/ANTLR.
> 2) Prepare lexer/parser.
> 3) Generate.
> 4) Write a test.





[jira] [Updated] (IGNITE-6276) SQL: Investigate parser generators

2017-11-08 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-6276:
---
Attachment: IGNITE-6276.patch

> SQL: Investigate parser generators
> --
>
> Key: IGNITE-6276
> URL: https://issues.apache.org/jira/browse/IGNITE-6276
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
> Attachments: IGNITE-6276.patch, antlr4-ignite.zip
>
>
> Now Ignite relies on H2 for SQL processing. It has been discussed many times 
> on dev list that we must start introducing our own SQL core in small 
> incremental steps. 
> Let's start with analyzing the options for implementing the parser part.
> We may begin with http://www.antlr.org/ and create a simple separate project 
> that would generate the parser for some simple DDL commands like DROP INDEX.
> This will give us a hint on the complexity and limitations of the approach.
> 1) Set up Maven/ANTLR.
> 2) Prepare lexer/parser.
> 3) Generate.
> 4) Write a test.





[jira] [Commented] (IGNITE-6276) SQL: Investigate parser generators

2017-11-08 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244102#comment-16244102
 ] 

Sergey Kalashnikov commented on IGNITE-6276:


We have identified several shortcomings with the generated-parser approach.
- Error messages produced by the parser are not customizable enough for a user 
to easily understand them. I doubt a user will understand messages like 'no 
viable alternative'.
- The ANTLR lexer cannot be controlled by the parser, which makes many useful 
things impossible.
- The performance assessment results aren't great in terms of scalability.

> SQL: Investigate parser generators
> --
>
> Key: IGNITE-6276
> URL: https://issues.apache.org/jira/browse/IGNITE-6276
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
> Attachments: antlr4-ignite.zip
>
>
> Now Ignite relies on H2 for SQL processing. It has been discussed many times 
> on dev list that we must start introducing our own SQL core in small 
> incremental steps. 
> Let's start with analyzing the options for implementing the parser part.
> We may begin with http://www.antlr.org/ and create a simple separate project 
> that would generate the parser for some simple DDL commands like DROP INDEX.
> This will give us a hint on the complexity and limitations of the approach.
> 1) Set up Maven/ANTLR.
> 2) Prepare lexer/parser.
> 3) Generate.
> 4) Write a test.





[jira] [Assigned] (IGNITE-6611) Optionally disable binary metadata for type

2017-11-07 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov reassigned IGNITE-6611:
--

Assignee: Sergey Kalashnikov  (was: Alexander Paschenko)

> Optionally disable binary metadata for type
> ---
>
> Key: IGNITE-6611
> URL: https://issues.apache.org/jira/browse/IGNITE-6611
> Project: Ignite
>  Issue Type: Task
>  Components: binary, sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
>
> We need to introduce special metadata mode for type - without metadata. This 
> way we will have a kind of "flexible" type with no restrictions. This will be 
> especially useful for SQL-related types where schema changes are possible 
> (e.g. ADD COLUMN -> DROP COLUMN).
> Public part should be exposed to:
> 1) {{BinaryTypeConfiguration}}
> 2) {{BinaryType}} - add a flag here.





[jira] [Commented] (IGNITE-6320) SQL: ANTLR performance assessment

2017-10-17 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207666#comment-16207666
 ] 

Sergey Kalashnikov commented on IGNITE-6320:


I've done some benchmarking of parsers generated by ANTLR from the MySQL and 
PL/SQL grammars versus an H2 prepared statement.
Also included is the ANTLR-based parser prototype I created earlier.

The measurements for the ANTLR parsers were made with the two-stage parsing 
strategy (SLL prediction mode).

Here are the results:

||Benchmark||ops/s||
|Complex Query ANTLR/MySQL Grammar|7830,615|
|Complex Query ANTLR/MySQL(Lexer only)|53441,816|
|Complex Query ANTLR/PL-SQL Grammar|3310,900|
|Complex Query H2|25368,322|
|Simple Query ANTLR/MySQL Grammar|28813,159|
|Simple Query ANTLR/PL-SQL Grammar|12581,615|
|Simple Query H2|118872,767|
|Trivial Query ANTLR/MySQL Grammar|120041,528|
|Trivial Query ANTLR/PL-SQL Grammar|63138,856|
|Trivial Query H2|546905,758|
|Drop index ANTLR/MySQL Grammar|350599,019|
|Drop index H2|2373889,332|
|Drop index ANTLR/IgniteProto|474410,677|
|Drop index JFLEX+BYACC/IgniteProto|389347,251|
|Batched Queries 1 Thread ANTLR/MySQL|35,256|
|Batched Queries 2 Threads ANTLR/MySQL|60,010|
|Batched Queries 4 Threads ANTLR/MySQL|95,171|
|Batched Queries 8 Threads ANTLR/MySQL|135,311|

It looks like the ANTLR parser doesn't scale well enough in a multi-threaded 
environment, although its single-thread performance is good.
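The two-stage strategy used for these measurements can be sketched generically. This is only an illustration of the control flow; the Supplier arguments stand in for ANTLR parser invocations and are not ANTLR API.

```java
import java.util.function.Supplier;

public class TwoStageParse {
    // Try the fast SLL prediction mode first; it may report a spurious
    // syntax error on some valid inputs, in which case we retry with the
    // slower but fully general LL mode before giving up.
    static <T> T parse(Supplier<T> fastSll, Supplier<T> fullLl) {
        try {
            return fastSll.get();
        } catch (RuntimeException sllRejected) {
            return fullLl.get();
        }
    }
}
```

In ANTLR terms, the first stage corresponds to running the parser with SLL prediction and a bail-out error strategy, and the second stage to re-parsing the same input with full LL prediction.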



> SQL: ANTLR performance assessment
> -
>
> Key: IGNITE-6320
> URL: https://issues.apache.org/jira/browse/IGNITE-6320
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
> Fix For: 2.4
>
>
> Proposed process:
> 1) Download MySQL grammar [1]
> 2) Generate parser
> 3) Measure parsing performance for both simple and complex SQL queries with 
> JMH.
> 4) Analyze the numbers
> [1] https://github.com/antlr/grammars-v4/tree/master/mysql





[jira] [Commented] (IGNITE-6024) SQL: execute DML statements on the server when possible

2017-10-11 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200095#comment-16200095
 ] 

Sergey Kalashnikov commented on IGNITE-6024:


[~al.psc], agreed on p1. Fixed it.

> SQL: execute DML statements on the server when possible
> ---
>
> Key: IGNITE-6024
> URL: https://issues.apache.org/jira/browse/IGNITE-6024
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
>  Labels: important, performance
> Fix For: 2.3
>
>
> Currently we execute DML statements as follows:
> 1) Get query result set to the client
> 2) Construct entry processors and send them to servers in batches
> This approach is inefficient as it causes a lot of unnecessary network 
> communication. Instead, we should execute DML statements directly on server 
> nodes when it is possible.
> Implementation considerations:
> 1) Determine set of queries which could be processed in this way. E.g., 
> {{LIMIT/OFFSET}}, {{GROUP BY}}, {{ORDER BY}}, {{DISTINCT}}, etc. are out of 
> question - they must go through the client anyway. Probably 
> {{skipMergeTable}} flag is a good starting point (good, not precise!)
> 2) Send request to every server and execute local DML right there
> 3) No failover support at the moment - throw "partial update" exception if 
> topology is unstable
> 4) Handle partition reservation carefully
> 5) Transactions: we still have single coordinator - this is a client. When 
> MVCC and TX SQL is ready, client will assign proper counters to server 
> requests.
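Consideration 1 above, deciding which statements may be pushed to the servers, boils down to a feature check. A hedged sketch follows; the Feature enum is purely illustrative, as Ignite inspects its parsed query rather than a flag set.

```java
import java.util.EnumSet;
import java.util.Set;

public class DmlPlacement {
    // Illustrative query features; not Ignite internals.
    enum Feature { LIMIT_OFFSET, GROUP_BY, ORDER_BY, DISTINCT }

    /** A DML statement may run directly on server nodes only when no
     *  feature requiring a client-side merge step is present. */
    static boolean serverExecutable(Set<Feature> features) {
        Set<Feature> mergeRequired = EnumSet.of(
            Feature.LIMIT_OFFSET, Feature.GROUP_BY,
            Feature.ORDER_BY, Feature.DISTINCT);
        for (Feature f : features)
            if (mergeRequired.contains(f))
                return false; // must go through the client
        return true;
    }
}
```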





[jira] [Commented] (IGNITE-6024) SQL: execute DML statements on the server when possible

2017-10-10 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199042#comment-16199042
 ] 

Sergey Kalashnikov commented on IGNITE-6024:


[~al.psc], [~vozerov]
I have fixed points 1 and 3.

> SQL: execute DML statements on the server when possible
> ---
>
> Key: IGNITE-6024
> URL: https://issues.apache.org/jira/browse/IGNITE-6024
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
>  Labels: important, performance
> Fix For: 2.3
>
>
> Currently we execute DML statements as follows:
> 1) Get query result set to the client
> 2) Construct entry processors and send them to servers in batches
> This approach is inefficient as it causes a lot of unnecessary network 
> communication. Instead, we should execute DML statements directly on server 
> nodes when it is possible.
> Implementation considerations:
> 1) Determine set of queries which could be processed in this way. E.g., 
> {{LIMIT/OFFSET}}, {{GROUP BY}}, {{ORDER BY}}, {{DISTINCT}}, etc. are out of 
> question - they must go through the client anyway. Probably 
> {{skipMergeTable}} flag is a good starting point (good, not precise!)
> 2) Send request to every server and execute local DML right there
> 3) No failover support at the moment - throw "partial update" exception if 
> topology is unstable
> 4) Handle partition reservation carefully
> 5) Transactions: we still have single coordinator - this is a client. When 
> MVCC and TX SQL is ready, client will assign proper counters to server 
> requests.





[jira] [Commented] (IGNITE-6350) SQL: Forbid NOT NULL constraints usage for a cache with configured read-through cache store

2017-10-02 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188510#comment-16188510
 ] 

Sergey Kalashnikov commented on IGNITE-6350:


[~vozerov], tests seem to be OK. I have resolved the .NET test issue the way 
Pavel suggested.

> SQL: Forbid NOT NULL constraints usage for a cache with configured 
> read-through cache store
> ---
>
> Key: IGNITE-6350
> URL: https://issues.apache.org/jira/browse/IGNITE-6350
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.3
>
>
> We need to throw an exception when a user attempts to create or alter a table 
> with a field declared as NOT NULL in case the corresponding cache 
> configuration employs a read-through cache store 
> ({{CacheConfiguration.setReadThrough()}}).
> These features do not fit together well, so we skip the support for the 
> moment; the check will keep things consistent.
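The check described above amounts to a single guard at table create/alter time. A minimal sketch with assumed inputs (a boolean and a field set rather than real Ignite configuration objects):

```java
import java.util.Set;

public class NotNullReadThroughCheck {
    /** Hypothetical shape of the validation: reject NOT NULL columns when
     *  the cache is configured with a read-through store. */
    static void validate(boolean readThroughEnabled, Set<String> notNullFields) {
        if (readThroughEnabled && !notNullFields.isEmpty())
            throw new IllegalArgumentException(
                "NOT NULL constraints are not allowed for a cache with a " +
                "read-through store: " + notNullFields);
    }
}
```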





[jira] [Commented] (IGNITE-6024) SQL: execute DML statements on the server when possible

2017-10-02 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188493#comment-16188493
 ] 

Sergey Kalashnikov commented on IGNITE-6024:


[~vozerov], I have applied your comments. Please take a look again.

> SQL: execute DML statements on the server when possible
> ---
>
> Key: IGNITE-6024
> URL: https://issues.apache.org/jira/browse/IGNITE-6024
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
>  Labels: important, performance
> Fix For: 2.3
>
>
> Currently we execute DML statements as follows:
> 1) Get query result set to the client
> 2) Construct entry processors and send them to servers in batches
> This approach is inefficient as it causes a lot of unnecessary network 
> communication. Instead, we should execute DML statements directly on server 
> nodes when it is possible.
> Implementation considerations:
> 1) Determine set of queries which could be processed in this way. E.g., 
> {{LIMIT/OFFSET}}, {{GROUP BY}}, {{ORDER BY}}, {{DISTINCT}}, etc. are out of 
> question - they must go through the client anyway. Probably 
> {{skipMergeTable}} flag is a good starting point (good, not precise!)
> 2) Send request to every server and execute local DML right there
> 3) No failover support at the moment - throw "partial update" exception if 
> topology is unstable
> 4) Handle partition reservation carefully
> 5) Transactions: we still have single coordinator - this is a client. When 
> MVCC and TX SQL is ready, client will assign proper counters to server 
> requests.





[jira] [Commented] (IGNITE-6350) SQL: Forbid NOT NULL constraints usage for a cache with configured read-through cache store

2017-10-02 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188004#comment-16188004
 ] 

Sergey Kalashnikov commented on IGNITE-6350:


[~vozerov], thanks. I have applied your comments. Please take a look.

> SQL: Forbid NOT NULL constraints usage for a cache with configured 
> read-through cache store
> ---
>
> Key: IGNITE-6350
> URL: https://issues.apache.org/jira/browse/IGNITE-6350
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.3
>
>
> We need to throw an exception when a user attempts to create or alter a table 
> with a field declared as NOT NULL in case the corresponding cache 
> configuration employs a read-through cache store 
> ({{CacheConfiguration.setReadThrough()}}).
> These features do not fit together well, so we skip the support for the 
> moment; the check will keep things consistent.





[jira] [Closed] (IGNITE-6387) SQL: NOT NULL fields validation with read-through cache store

2017-09-29 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov closed IGNITE-6387.
--

> SQL: NOT NULL fields validation with read-through cache store
> -
>
> Key: IGNITE-6387
> URL: https://issues.apache.org/jira/browse/IGNITE-6387
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.3
>
> Attachments: ignite-6387.patch
>
>
> There is a case left unsolved during the implementation of the SQL NOT NULL 
> constraints feature.
> It may happen that a cache update operation fails and the value loaded from 
> the store is put into the cache.
> This value must also be validated against the configured NOT NULL 
> constraints.
> See {{CacheConfiguration.setCacheStoreFactory()}}, 
> {{CacheConfiguration.setReadThrough()}}, {{QueryEntity.setNotNullFields()}}
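The missing validation can be sketched as follows; field maps stand in for Ignite's binary objects, and all names are illustrative rather than Ignite internals.

```java
import java.util.Map;
import java.util.Set;

public class StoreLoadValidator {
    /** Sketch of the missing check: a value loaded from the underlying
     *  store must satisfy the same NOT NULL constraints as a regular
     *  cache put would. */
    static void validateLoaded(Map<String, Object> fields, Set<String> notNull) {
        for (String f : notNull)
            if (fields.get(f) == null)
                throw new IllegalStateException(
                    "Null value loaded from store for NOT NULL field: " + f);
    }
}
```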





[jira] [Resolved] (IGNITE-6387) SQL: NOT NULL fields validation with read-through cache store

2017-09-29 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov resolved IGNITE-6387.

Resolution: Won't Fix

> SQL: NOT NULL fields validation with read-through cache store
> -
>
> Key: IGNITE-6387
> URL: https://issues.apache.org/jira/browse/IGNITE-6387
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.3
>
> Attachments: ignite-6387.patch
>
>
> There is a case left unsolved during the implementation of the SQL NOT NULL 
> constraints feature.
> It may happen that a cache update operation fails and the value loaded from 
> the store is put into the cache.
> This value must also be validated against the configured NOT NULL 
> constraints.
> See {{CacheConfiguration.setCacheStoreFactory()}}, 
> {{CacheConfiguration.setReadThrough()}}, {{QueryEntity.setNotNullFields()}}





[jira] [Updated] (IGNITE-6387) SQL: NOT NULL fields validation with read-through cache store

2017-09-29 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-6387:
---
Attachment: ignite-6387.patch

> SQL: NOT NULL fields validation with read-through cache store
> -
>
> Key: IGNITE-6387
> URL: https://issues.apache.org/jira/browse/IGNITE-6387
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.3
>
> Attachments: ignite-6387.patch
>
>
> There is a case left unsolved during the implementation of the SQL NOT NULL 
> constraints feature.
> It may happen that a cache update operation fails and the value loaded from 
> the store is put into the cache.
> This value must also be validated against the configured NOT NULL 
> constraints.
> See {{CacheConfiguration.setCacheStoreFactory()}}, 
> {{CacheConfiguration.setReadThrough()}}, {{QueryEntity.setNotNullFields()}}





[jira] [Commented] (IGNITE-6024) SQL: execute DML statements on the server when possible

2017-09-29 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185487#comment-16185487
 ] 

Sergey Kalashnikov commented on IGNITE-6024:


Refreshed from master.

[~vozerov], Please take a look.


> SQL: execute DML statements on the server when possible
> ---
>
> Key: IGNITE-6024
> URL: https://issues.apache.org/jira/browse/IGNITE-6024
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Sergey Kalashnikov
>  Labels: important, performance
> Fix For: 2.3
>
>
> Currently we execute DML statements as follows:
> 1) Get query result set to the client
> 2) Construct entry processors and send them to servers in batches
> This approach is inefficient as it causes a lot of unnecessary network 
> communication  Instead, we should execute DML statements directly on server 
> nodes when it is possible.
> Implementation considerations:
> 1) Determine set of queries which could be processed in this way. E.g., 
> {{LIMIT/OFFSET}}, {{GROUP BY}}, {{ORDER BY}}, {{DISTINCT}}, etc. are out of 
> question - they must go through the client anyway. Probably 
> {{skipMergeTable}} flag is a good starting point (good, not precise!)
> 2) Send request to every server and execute local DML right there
> 3) No failover support at the moment - throw "partial update" exception if 
> topology is unstable
> 4) Handle partition reservation carefully
> 5) Transactions: we still have single coordinator - this is a client. When 
> MVCC and TX SQL is ready, client will assign proper counters to server 
> requests.





[jira] [Commented] (IGNITE-6511) ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small

2017-09-27 Thread Sergey Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182469#comment-16182469
 ] 

Sergey Kalashnikov commented on IGNITE-6511:


Workaround: check the resulting error message length (reallen > 0) before 
manipulating the error message pointer.

Reproducer (the stripped address-of arguments to {{SQLGetDiagRec}} are restored 
below):

{code:cpp}
BOOST_AUTO_TEST_CASE(TestLongErrorMessage)
{
    StartAdditionalNode("Node1");

    Connect("DRIVER={Apache Ignite};ADDRESS=127.0.0.1:0;SCHEMA=PUBLIC");

    SQLCHAR req[] = "DROP INDEX Nonexisting";

    SQLRETURN ret;

    ret = SQLExecDirect(stmt, req, SQL_NTS);

    BOOST_REQUIRE_EQUAL(ret, SQL_ERROR);

    SQLCHAR sqlstate[7] = {};
    SQLINTEGER nativeCode;

    SQLCHAR message[10];
    SQLSMALLINT reallen = 0;

    SQLGetDiagRec(SQL_HANDLE_STMT, stmt, 1, sqlstate, &nativeCode, message,
        sizeof(message), &reallen);

    BOOST_CHECK_EQUAL(reallen, sizeof(message));
}
{code}

> ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small
> --
>
> Key: IGNITE-6511
> URL: https://issues.apache.org/jira/browse/IGNITE-6511
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Sergey Kalashnikov
>  Labels: usability
>
> When buffer size provided for error message is not big enough to hold the 
> entire error message, the function {{SqlGetDiagRec()}} returns wrong 
> resulting string length (-4) and wrong result code ({{SQL_SUCCESS}} instead 
> of {{SQL_SUCCESS_WITH_INFO}}).
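The behavior the specification requires on truncation can be modeled in a short sketch. This is an illustrative model of the contract only, not the driver code: the reported length must be the full message length and the return code SQL_SUCCESS_WITH_INFO.

```java
// Illustrative model of the ODBC diagnostic-text truncation contract.
final class DiagRecModel {
    static final int SQL_SUCCESS = 0;
    static final int SQL_SUCCESS_WITH_INFO = 1;

    static final class Result {
        final int code;      // return code of the call
        final String text;   // (possibly truncated) text placed in the buffer
        final int realLen;   // length the driver must report: the FULL message

        Result(int code, String text, int realLen) {
            this.code = code;
            this.text = text;
            this.realLen = realLen;
        }
    }

    /** Models copying a message into a caller buffer of bufLen characters
     *  (one reserved for the NUL terminator). On truncation the spec
     *  requires SQL_SUCCESS_WITH_INFO and the full length, never -4. */
    static Result getDiagText(String message, int bufLen) {
        int full = message.length();
        if (full + 1 <= bufLen)
            return new Result(SQL_SUCCESS, message, full);

        String truncated = message.substring(0, Math.max(0, bufLen - 1));
        return new Result(SQL_SUCCESS_WITH_INFO, truncated, full);
    }
}
```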





[jira] [Created] (IGNITE-6511) ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small

2017-09-27 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-6511:
--

 Summary: ODBC: SQLGetDiagRec doesn't follow specification when 
buffer size is too small
 Key: IGNITE-6511
 URL: https://issues.apache.org/jira/browse/IGNITE-6511
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Reporter: Sergey Kalashnikov


When buffer size provided for error message is not big enough to hold the 
entire error message, the function {noformat}SqlGetDiagRec{noformat} returns 
wrong resulting string length (-4) and wrong result code 
({noformat}SQL_SUCCESS{noformat} instead of 
{noformat}SQL_SUCCESS_WITH_INFO{noformat}).





[jira] [Updated] (IGNITE-6511) ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small

2017-09-27 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-6511:
---
Description: When buffer size provided for error message is not big enough 
to hold the entire error message, the function {{SqlGetDiagRec()}} returns 
wrong resulting string length (-4) and wrong result code ({{SQL_SUCCESS}} 
instead of {{SQL_SUCCESS_WITH_INFO}}).  (was: When buffer size provided for 
error message is not big enough to hold the entire error message, the function 
{{SqlGetDiagRec}} returns wrong resulting string length (-4) and wrong result 
code ({{SQL_SUCCESS}} instead of {{SQL_SUCCESS_WITH_INFO}}).)

> ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small
> --
>
> Key: IGNITE-6511
> URL: https://issues.apache.org/jira/browse/IGNITE-6511
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Sergey Kalashnikov
>  Labels: usability
>
> When buffer size provided for error message is not big enough to hold the 
> entire error message, the function {{SqlGetDiagRec()}} returns wrong 
> resulting string length (-4) and wrong result code ({{SQL_SUCCESS}} instead 
> of {{SQL_SUCCESS_WITH_INFO}}).





[jira] [Updated] (IGNITE-6511) ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small

2017-09-27 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-6511:
---
Description: When buffer size provided for error message is not big enough 
to hold the entire error message, the function {{SqlGetDiagRec}} returns wrong 
resulting string length (-4) and wrong result code ({{SQL_SUCCESS}} instead of 
{{SQL_SUCCESS_WITH_INFO}}).  (was: When buffer size provided for error message 
is not big enough to hold the entire error message, the function 
{noformat}SqlGetDiagRec{noformat} returns wrong resulting string length (-4) 
and wrong result code ({{SQL_SUCCESS}} instead of {{SQL_SUCCESS_WITH_INFO}}).)

> ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small
> --
>
> Key: IGNITE-6511
> URL: https://issues.apache.org/jira/browse/IGNITE-6511
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Sergey Kalashnikov
>  Labels: usability
>
> When buffer size provided for error message is not big enough to hold the 
> entire error message, the function {{SqlGetDiagRec}} returns wrong resulting 
> string length (-4) and wrong result code ({{SQL_SUCCESS}} instead of 
> {{SQL_SUCCESS_WITH_INFO}}).





[jira] [Updated] (IGNITE-6511) ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small

2017-09-27 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-6511:
---
Description: When buffer size provided for error message is not big enough 
to hold the entire error message, the function 
{noformat}SqlGetDiagRec{noformat} returns wrong resulting string length (-4) 
and wrong result code ({{SQL_SUCCESS}} instead of {{SQL_SUCCESS_WITH_INFO}}).  
(was: When buffer size provided for error message is not big enough to hold the 
entire error message, the function {noformat}SqlGetDiagRec{noformat} returns 
wrong resulting string length (-4) and wrong result code 
({noformat}SQL_SUCCESS{noformat} instead of 
{noformat}SQL_SUCCESS_WITH_INFO{noformat}).)

> ODBC: SQLGetDiagRec doesn't follow specification when buffer size is too small
> --
>
> Key: IGNITE-6511
> URL: https://issues.apache.org/jira/browse/IGNITE-6511
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Sergey Kalashnikov
>  Labels: usability
>
> When buffer size provided for error message is not big enough to hold the 
> entire error message, the function {noformat}SqlGetDiagRec{noformat} returns 
> wrong resulting string length (-4) and wrong result code ({{SQL_SUCCESS}} 
> instead of {{SQL_SUCCESS_WITH_INFO}}).





  1   2   3   >