[jira] [Created] (IGNITE-4496) Review all logging for sensitive data leak

2016-12-26 Thread Alexandr Kuramshin (JIRA)
Alexandr Kuramshin created IGNITE-4496:
--

 Summary: Review all logging for sensitive data leak
 Key: IGNITE-4496
 URL: https://issues.apache.org/jira/browse/IGNITE-4496
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexandr Kuramshin
Assignee: Alexandr Kuramshin


While the sensitive logging option has been added and toString() methods have been 
fixed, not all logging has been checked for sensitive data leaks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4167) Add an option to avoid printing out sensitive data into logs

2016-12-26 Thread Alexandr Kuramshin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandr Kuramshin resolved IGNITE-4167.

Resolution: Fixed

Option added, toString() methods updated.
Basic key/value logging fixed (upon cache operations).

Some sensitive logging may still occur (rarely) until all logging has been 
checked.

> Add an option to avoid printing out sensitive data into logs
> 
>
> Key: IGNITE-4167
> URL: https://issues.apache.org/jira/browse/IGNITE-4167
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Kholodov
>Assignee: Alexandr Kuramshin
>
>
> We are seeing sensitive cache data being output in Ignite debug logging. I've 
> tracked it down to at least two places:
> 1. GridToStringBuilder uses reflection to print all fields in cache objects 
> that are not annotated with @GridToStringExclude
> 2. GridCacheMapEntry does a direct toString() call on the value objects in a 
> debug log
> As a fabric platform, we won't always have control over the object classes 
> being added to/retrieved from the cache.
> We must always assume that all keys and values are sensitive and should not 
> be output in logs except in local debugging situations. To this end, we 
> need a configuration option (turned OFF by default) that allows keys/values 
> to be written to log messages.
>  
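
For illustration only, here is a minimal sketch of the kind of guard such an option implies. The property name and helper class below are assumptions for this sketch, not the actual Ignite API.

{code:java}
// Hypothetical sketch: gate key/value rendering behind a flag that is OFF by default.
public final class SensitiveDataGuard {
    /** Assumed system property name; the real option may be named differently. */
    private static final boolean INCLUDE_SENSITIVE =
        Boolean.getBoolean("ignite.toString.includeSensitive");

    private SensitiveDataGuard() {
        // No instances.
    }

    /** Renders a key or value only when sensitive output is explicitly enabled. */
    public static String render(Object keyOrVal) {
        return INCLUDE_SENSITIVE ? String.valueOf(keyOrVal) : "<sensitive data hidden>";
    }
}
{code}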



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4472) Web Console: Implement usage tracing

2016-12-26 Thread Dmitriy Shabalin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779625#comment-15779625
 ] 

Dmitriy Shabalin commented on IGNITE-4472:
--

Added statistics page

> Web Console: Implement usage tracing
> 
>
> Key: IGNITE-4472
> URL: https://issues.apache.org/jira/browse/IGNITE-4472
> Project: Ignite
>  Issue Type: Task
>Reporter: Dmitriy Shabalin
>Assignee: Dmitriy Shabalin
>
> We need to track some kind of "activity scores" for users.
> Activities:
> # Configuration
> # SQL
> # Demo
> And show that score on the admin panel.
> Maybe this could be implemented with some library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4495) .NET: Support user-defined identity resolvers

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4495:

Labels: .NET  (was: .NET DML)

> .NET: Support user-defined identity resolvers
> -
>
> Key: IGNITE-4495
> URL: https://issues.apache.org/jira/browse/IGNITE-4495
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, SQL
>Reporter: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> See {{BinaryTypeConfiguration.EqualityComparer}}: it supports predefined 
> implementations only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4495) .NET: Support user-defined identity resolvers

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4495:

Affects Version/s: (was: 2.0)

> .NET: Support user-defined identity resolvers
> -
>
> Key: IGNITE-4495
> URL: https://issues.apache.org/jira/browse/IGNITE-4495
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, SQL
>Reporter: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> See {{BinaryTypeConfiguration.EqualityComparer}}: it supports predefined 
> implementations only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4495) .NET: Support user-defined identity resolvers

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4495:

Component/s: SQL

> .NET: Support user-defined identity resolvers
> -
>
> Key: IGNITE-4495
> URL: https://issues.apache.org/jira/browse/IGNITE-4495
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, SQL
>Reporter: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> See {{BinaryTypeConfiguration.EqualityComparer}}: it supports predefined 
> implementations only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4495) .NET: Support user-defined identity resolvers

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4495:

Fix Version/s: (was: 2.3)
   2.0

> .NET: Support user-defined identity resolvers
> -
>
> Key: IGNITE-4495
> URL: https://issues.apache.org/jira/browse/IGNITE-4495
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, SQL
>Reporter: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> See {{BinaryTypeConfiguration.EqualityComparer}}: it supports predefined 
> implementations only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4157) Use discovery custom messages instead of marshaller cache

2016-12-26 Thread Sergey Chugunov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778838#comment-15778838
 ] 

Sergey Chugunov commented on IGNITE-4157:
-

Split the *DiscoveryDataContainer* class into two parts: one for the public API, 
another for the internal implementation. Working on moving the functionality.

> Use discovery custom messages instead of marshaller cache
> -
>
> Key: IGNITE-4157
> URL: https://issues.apache.org/jira/browse/IGNITE-4157
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Alexey Goncharuk
>Assignee: Sergey Chugunov
> Fix For: 2.0
>
>
> Currently we use system caches for keeping the classname to class ID mapping and 
> for storing binary metadata.
> This has several serious disadvantages:
> 1) We need to introduce at least two additional thread pools for each of 
> these caches.
> 2) Since cache operations require stable topology, registering a class ID or 
> updating metadata inside a transaction or another cache operation is tricky 
> and deadlock-prone.
> 3) It may be beneficial in some cases to have nodes with no caches at all; 
> currently this is impossible because system caches are always present.
> 4) Reading binary metadata leads to huge local contention, and caching metadata 
> values in a local map doubles memory consumption.
> I suggest we use discovery custom events for these purposes. Each node will 
> have a corresponding local map (state) which will be updated inside the custom 
> event handler. At first glance, this should remove all the 
> disadvantages above.
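
A rough sketch of the proposed approach, with assumed names (not the actual internal classes): each node keeps a local map updated from the discovery custom event handler, so lookups never depend on cache operations or stable topology.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Assumed names for illustration only.
class LocalMarshallerMappings {
    /** Node-local state: class name -> class ID. */
    private final Map<String, Integer> mappings = new ConcurrentHashMap<>();

    /** Called from the discovery custom event handler on every node. */
    void onMappingAccepted(String clsName, int typeId) {
        mappings.putIfAbsent(clsName, typeId);
    }

    /** Plain map lookup: no cache operation, no extra thread pools, no deadlock risk. */
    Integer typeId(String clsName) {
        return mappings.get(clsName);
    }
}
{code}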



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4003) Slow or faulty client can stall the whole cluster.

2016-12-26 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778835#comment-15778835
 ] 

Andrey Gura commented on IGNITE-4003:
-

Added tests for the scenario where a client node should be failed if communication 
fails to establish a connection.

> Slow or faulty client can stall the whole cluster.
> --
>
> Key: IGNITE-4003
> URL: https://issues.apache.org/jira/browse/IGNITE-4003
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Andrey Gura
>Priority: Critical
> Fix For: 2.0
>
>
> Steps to reproduce:
> 1) Start two server nodes and add some data to the cache.
> 2) Start a client from a Docker subnet which is not visible from the outside. 
> The client will join the cluster.
> 3) Try to put something to the cache or start another node to force a rebalance.
> The cluster is stuck at this moment. Root cause: servers are constantly trying 
> to establish an outgoing connection to the client, but fail as the Docker subnet is 
> not visible from the outside. It may stop virtually all cluster operations.
> Typical thread dump:
> {code}
> org.apache.ignite.IgniteCheckedException: Failed to send message (node may 
> have left the grid or TCP connection cannot be established due to firewall 
> issues) [node=TcpDiscoveryNode [id=a15d74c2-1ec2-4349-9640-aeacd70d8714, 
> addrs=[127.0.0.1, 172.17.0.6], sockAddrs=[/127.0.0.1:0, /127.0.0.1:0, 
> /172.17.0.6:0], discPort=0, order=7241, intOrder=3707, 
> lastExchangeTime=1474096941045, loc=false, ver=1.5.23#20160526-sha1:259146da, 
> isClient=true], topic=T4 [topic=TOPIC_CACHE, 
> id1=949732fd-1360-3a58-8d9e-0ff6ea6182cc, 
> id2=a15d74c2-1ec2-4349-9640-aeacd70d8714, id3=2], msg=GridContinuousMessage 
> [type=MSG_EVT_NOTIFICATION, routineId=7e13c48e-6933-48b2-9f15-8d92007930db, 
> data=null, futId=null], policy=2]
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1129)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1347)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1227)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1198)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1180)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendNotification(GridContinuousProcessor.java:841)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:800)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:787)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.access$700(CacheContinuousQueryHandler.java:91)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$1.onEntryUpdated(CacheContinuousQueryHandler.java:412)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:343)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:250)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3476)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture$MiniFuture.onResult(GridDhtForceKeysFuture.java:548)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture.onResult(GridDhtForceKeysFuture.java:207)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.processForceKeyResponse(GridDhtPreloader.java:636)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> 

[jira] [Commented] (IGNITE-4147) SSL misconfiguration is not handled properly

2016-12-26 Thread Valentin Kulichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778747#comment-15778747
 ] 

Valentin Kulichenko commented on IGNITE-4147:
-

Failing fast is the correct behavior here, I think.

> SSL misconfiguration is not handled properly
> 
>
> Key: IGNITE-4147
> URL: https://issues.apache.org/jira/browse/IGNITE-4147
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Dmitry Karachentsev
> Fix For: 2.0
>
>
> Currently, if one tries to start a node with an incorrect SSL configuration 
> (i.e. a node with SSL disabled tries to connect to a secured cluster, or the other 
> way around), this new node does not show any exceptions or warnings. Instead, the 
> user may end up having two topologies: one secured and another non-secured.
> This is counterintuitive. We should try to detect this and throw an exception 
> in this case.
> Here are possible scenarios that we should consider:
> * Unsecured server tries to join secured cluster.
> * Secured server tries to join unsecured cluster.
> * Unsecured client tries to join secured cluster.
> * Secured client tries to join unsecured cluster.
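
For context, a minimal configuration sketch of the mismatch being described; keystore paths and passwords are placeholders, and the setters used are the standard ones on {{org.apache.ignite.ssl.SslContextFactory}}.

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SslMismatchSketch {
    public static void main(String[] args) {
        // Node A: SSL enabled (placeholder keystore path/password).
        SslContextFactory sslFactory = new SslContextFactory();
        sslFactory.setKeyStoreFilePath("/path/to/keystore.jks");
        sslFactory.setKeyStorePassword("changeit".toCharArray());

        IgniteConfiguration securedCfg = new IgniteConfiguration();
        securedCfg.setSslContextFactory(sslFactory);

        // Node B: SSL not configured. With the current behavior it silently forms or
        // joins an unsecured topology instead of failing fast with a clear exception.
        IgniteConfiguration plainCfg = new IgniteConfiguration();
    }
}
{code}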



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4045) .NET: Support DML API

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778622#comment-15778622
 ] 

Pavel Tupitsyn edited comment on IGNITE-4045 at 12/26/16 6:03 PM:
--

2) We don't need this. An identity resolver in .NET is needed only to write 
the correct hash code to a stream. Enums are written as [typeId, value]. There is 
no hash code.
3) There is already a ticket: IGNITE-4397. In current ticket FieldComparer is 
hidden from public API, please ignore it.
4) I agree. This is done intentionally because we are on a very hot path here 
and want to avoid any allocations or overhead associated with {{ReadByte}} and 
{{ReadByteArray}}. I've refactored this to a generic {{IBinaryStream.Apply}} 
method which allows operating on byte pointer directly without allocations.
5) IGNITE-4495


was (Author: ptupitsyn):
3) There is already a ticket: IGNITE-4397. In current ticket FieldComparer is 
hidden from public API, please ignore it.
4) I agree. This is done intentionally because we are on a very hot path here 
and want to avoid any allocations or overhead associated with {{ReadByte}} and 
{{ReadByteArray}}. I've refactored this to a generic {{IBinaryStream.Apply}} 
method which allows operating on byte pointer directly without allocations.
5) IGNITE-4495

> .NET: Support DML API
> -
>
> Key: IGNITE-4045
> URL: https://issues.apache.org/jira/browse/IGNITE-4045
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Pavel Tupitsyn
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of ODBC and JDBC drivers.
> As the next step we should include the similar functionality into Ignite.NET 
> by doing the following:
> - Implement DML API;
> - Enhance {{QueryExample.cs}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.NET readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when this 
> ticket IGNITE-4018 is ready 
> (https://apacheignite.readme.io/docs/distributed-dml).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4045) .NET: Support DML API

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778622#comment-15778622
 ] 

Pavel Tupitsyn edited comment on IGNITE-4045 at 12/26/16 5:30 PM:
--

3) There is already a ticket: IGNITE-4397. In current ticket FieldComparer is 
hidden from public API, please ignore it.
4) I agree. This is done intentionally because we are on a very hot path here 
and want to avoid any allocations or overhead associated with {{ReadByte}} and 
{{ReadByteArray}}. I've refactored this to a generic {{IBinaryStream.Apply}} 
method which allows operating on byte pointer directly without allocations.
5) IGNITE-4495


was (Author: ptupitsyn):
3) There is already a ticket: IGNITE-4397. In current ticket FieldComparer is 
hidden from public API, please ignore it.
5) IGNITE-4495

> .NET: Support DML API
> -
>
> Key: IGNITE-4045
> URL: https://issues.apache.org/jira/browse/IGNITE-4045
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Pavel Tupitsyn
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of ODBC and JDBC drivers.
> As the next step we should include the similar functionality into Ignite.NET 
> by doing the following:
> - Implement DML API;
> - Enhance {{QueryExample.cs}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.NET readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when this 
> ticket IGNITE-4018 is ready 
> (https://apacheignite.readme.io/docs/distributed-dml).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4045) .NET: Support DML API

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778622#comment-15778622
 ] 

Pavel Tupitsyn commented on IGNITE-4045:


3) There is already a ticket: IGNITE-4397. In current ticket FieldComparer is 
hidden from public API, please ignore it.
5) IGNITE-4495

> .NET: Support DML API
> -
>
> Key: IGNITE-4045
> URL: https://issues.apache.org/jira/browse/IGNITE-4045
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Pavel Tupitsyn
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of ODBC and JDBC drivers.
> As the next step we should include the similar functionality into Ignite.NET 
> by doing the following:
> - Implement DML API;
> - Enhance {{QueryExample.cs}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.NET readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when this 
> ticket IGNITE-4018 is ready 
> (https://apacheignite.readme.io/docs/distributed-dml).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4368) Data Streamer closed exception in Kafka Streamer

2016-12-26 Thread Anil (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778611#comment-15778611
 ] 

Anil commented on IGNITE-4368:
--

I did a couple of tests and noticed this exception in case of network 
segmentation, which stops the cache (as STOP is the default network 
segmentation policy). So I am not sure whether this is an issue with the Kafka streamer.



> Data Streamer closed exception in Kafka Streamer
> 
>
> Key: IGNITE-4368
> URL: https://issues.apache.org/jira/browse/IGNITE-4368
> Project: Ignite
>  Issue Type: Bug
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Anil
>
> There is a possibility of a data streamer closed exception in the following case:
> 1. Let's say a Kafka streamer is created when the application is started.
> 2. The application node joins the Ignite cluster.
> 3. The node disconnects from the cluster.
> 4. The node joins the cluster after the failure connect time and network timeout time.
> The data streamer closed exception happens between #3 and #4. Is it still an issue 
> post #4?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4495) .NET: Support user-defined identity resolvers

2016-12-26 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4495:
--

 Summary: .NET: Support user-defined identity resolvers
 Key: IGNITE-4495
 URL: https://issues.apache.org/jira/browse/IGNITE-4495
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Affects Versions: 2.0
Reporter: Pavel Tupitsyn
 Fix For: 2.3


See {{BinaryTypeConfiguration.EqualityComparer}}: it supports predefined 
implementations only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4494) .NET: Optimize ExamplesTest.TestRemoteNodes

2016-12-26 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-4494.

Resolution: Fixed
  Assignee: (was: Pavel Tupitsyn)

Node reuse implemented, additional validation for example descriptions added.

> .NET: Optimize ExamplesTest.TestRemoteNodes
> ---
>
> Key: IGNITE-4494
> URL: https://issues.apache.org/jira/browse/IGNITE-4494
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Affects Versions: 1.8
>Reporter: Pavel Tupitsyn
>  Labels: .NET, tests
> Fix For: 2.0
>
>
> Currently for each example test with remote node we do the following:
> * Start monitoring node
> * Start a standalone node
> * Stop monitoring node
> * Start example
> But since IGNITE-3427 has been implemented, all examples have the same config. We 
> can start standalone node(s) once and run all examples, saving a lot of time 
> (currently example tests are the most time consuming).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4490) Optimize DML for fast INSERT and MERGE

2016-12-26 Thread Alexander Paschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778549#comment-15778549
 ] 

Alexander Paschenko commented on IGNITE-4490:
-

What has changed: when a query contains no expressions, just constants and 
parameters, and the insert is done in a row-based manner, such a query is 
processed in a way similar to fast UPDATE and DELETE.

> Optimize DML for fast INSERT and MERGE
> --
>
> Key: IGNITE-4490
> URL: https://issues.apache.org/jira/browse/IGNITE-4490
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Vladimir Ozerov
> Fix For: 1.8
>
>
> It's possible to avoid any SQL querying and map some INSERT and MERGE 
> statements to cache operations in a way similar to that of UPDATE and DELETE 
> - i.e. don't make queries when there are no expressions to evaluate in the 
> query and enhance update plans to perform direct cache operations when INSERT 
> and MERGE affect columns {{_key}} and {{_val}} only.
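
For illustration, two statements contrasting the two paths described above. This sketch assumes a cache configured with Integer/String indexed types (so the table name is {{String}} and {{_key}}/{{_val}} are available); names are placeholders.

{code:java}
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class FastDmlSketch {
    /** Contrasts a query-less INSERT with one that still needs SQL evaluation. */
    static void insertExamples(IgniteCache<Integer, String> cache) {
        // Only constants/parameters over _key/_val: eligible for the fast,
        // query-less path (mapped to a direct cache operation).
        cache.query(new SqlFieldsQuery(
            "INSERT INTO String (_key, _val) VALUES (?, ?)").setArgs(1, "one")).getAll();

        // An expression in VALUES still requires SQL evaluation, so it keeps
        // the regular query-based path.
        cache.query(new SqlFieldsQuery(
            "INSERT INTO String (_key, _val) VALUES (?, UPPER(?))").setArgs(2, "two")).getAll();
    }
}
{code}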



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4490) Optimize DML for fast INSERT and MERGE

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778543#comment-15778543
 ] 

ASF GitHub Bot commented on IGNITE-4490:


GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/1387

IGNITE-4490 Query-less INSERT and MERGE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4490

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1387.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1387


commit d362da5fdb0671d17672b0096d064ce30254638f
Author: Alexander Paschenko 
Date:   2016-12-26T15:39:10Z

IGNITE-4490 Query-less INSERT and MERGE




> Optimize DML for fast INSERT and MERGE
> --
>
> Key: IGNITE-4490
> URL: https://issues.apache.org/jira/browse/IGNITE-4490
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> It's possible to avoid any SQL querying and map some INSERT and MERGE 
> statements to cache operations in a way similar to that of UPDATE and DELETE 
> - i.e. don't make queries when there are no expressions to evaluate in the 
> query and enhance update plans to perform direct cache operations when INSERT 
> and MERGE affect columns {{_key}} and {{_val}} only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4372) Set up code coverage reports

2016-12-26 Thread Ksenia Rybakova (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778524#comment-15778524
 ] 

Ksenia Rybakova commented on IGNITE-4372:
-

- Since I was not able to make SonarQube work, plain JaCoCo is being tested. 
Currently there is a way to get a coverage report for each module. Trying to find a 
way to aggregate them into one report.

> Set up code coverage reports
> 
>
> Key: IGNITE-4372
> URL: https://issues.apache.org/jira/browse/IGNITE-4372
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ksenia Rybakova
>Assignee: Ksenia Rybakova
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-3625) IGFS: Use common naming for IGFS meta and data caches.

2016-12-26 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766958#comment-15766958
 ] 

Taras Ledkov edited comment on IGNITE-3625 at 12/26/16 3:35 PM:


In this case we cannot guarantee that IGFS uses the cache with the standard name.
Also we should support both configurations: by cache name and by cache 
configuration.

Should the {{getDataCacheName}} and {{getMetaCacheName}} methods be marked as 
deprecated in this case?

Support of the {{getCacheName}} / {{setCacheName}} methods makes the solution more 
complicated: in all places in the code where we use cache names we must check the 
IGFS configuration to see whether a cache name or a cache config is used.

What about mixed configurations, e.g. the data cache is configured by name and the 
meta cache by cache config?


was (Author: tledkov-gridgain):
In this case we cannot guarantee that IGFS uses the cache with standard name.
Also we should support both configuration: by cache name and by cache 
configuration.

Should the {{getDataCacheName}} and {{getMetaCacheName}} methods be marked as 
deprecated in this case?

Support of the {{getCacheName}} / {{setCacheName}} makes the solution is more 
complicated. In all places of code where we use cache names we must be check 
IGFS configuration: what is used cache name used or cache config.

What about mixed mixed configurations: e.g. data cache is configured by name 
and meta by cache config?

> IGFS: Use common naming for IGFS meta and data caches.
> --
>
> Key: IGNITE-3625
> URL: https://issues.apache.org/jira/browse/IGNITE-3625
> Project: Ignite
>  Issue Type: Task
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
> Fix For: 2.0
>
>
> Currently IGFS is configured by passing names of two caches: meta and data. 
> See {{FileSystemConfiguration.metaCacheName}} and 
> {{FileSystemConfiguration.dataCacheName}}.
> These two caches are then considered internal and are not accessible to the 
> user.
> We need to do the following during node startup:
> 1) If a certain cache is configured as the meta or data cache for multiple IGFS-es, 
> or if it is configured as both the meta and data cache for a single IGFS, then 
> throw an exception.
> Relevant code pieces:
> {{IgfsProcessor.validateLocalIgfsConfigurations}}
> {{IgfsProcessorSelfTest}}.
> 2) During node startup change the name of this cache to 
> {{igfs-IGFS_NAME-meta}} or {{igfs-IGFS_NAME-data}}. Change must be performed 
> both inside IGFS config and cache config.
> Relevant code pieces:
> {{CacheConfiguration.name}}
> {{FileSystemConfiguration.metaCacheName}}
> {{FileSystemConfiguration.dataCacheName}}
> {{IgfsUtils.prepareCacheConfiguration}} - where change will be done.
> 3) If any of the new names clashes with any other cache name, an exception should 
> be thrown. The simplest way: throw an exception if a user-configured cache 
> name starts with {{igfs-}} and ends with {{-meta}} or {{-data}}.
> Relevant code pieces:
> {{IgniteNamedInstance.initializeDefaultCacheConfiguration}}.
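
A compact sketch of items 2 and 3 above; the method names are assumptions for illustration, not the actual {{IgfsUtils}} code.

{code:java}
class IgfsCacheNamingSketch {
    /** Item 2: derive internal cache names from the IGFS name at startup. */
    static String metaCacheName(String igfsName) {
        return "igfs-" + igfsName + "-meta";
    }

    static String dataCacheName(String igfsName) {
        return "igfs-" + igfsName + "-data";
    }

    /** Item 3: reject user-configured cache names that collide with the reserved scheme. */
    static void validateUserCacheName(String name) {
        if (name.startsWith("igfs-") && (name.endsWith("-meta") || name.endsWith("-data")))
            throw new IllegalArgumentException("Cache name is reserved for IGFS: " + name);
    }
}
{code}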



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4363) Inner properties mutation broken in SQL UPDATE

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778493#comment-15778493
 ] 

Vladimir Ozerov commented on IGNITE-4363:
-

Alex, I re-merged the ticket with master. Please confirm the TC status.

> Inner properties mutation broken in SQL UPDATE
> --
>
> Key: IGNITE-4363
> URL: https://issues.apache.org/jira/browse/IGNITE-4363
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, SQL
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> Discovered in the course of working on IGNITE-4340.
> Say we have the following type for cache values:
> {code:java}
> static final class AllTypes implements Serializable {
> /**
>  * Data Long.
>  */
> @QuerySqlField
> Long longCol;
> /**
>  * Inner type object.
>  */
> @QuerySqlField
> InnerType innerTypeCol;
> /** */
> static final class InnerType implements Serializable {
> /** */
> @QuerySqlField
> Long innerLongCol;
> /** */
> @QuerySqlField
> String innerStrCol;
>}
> }
> {code}
> Queries like this fail for both optimized and binary marshaller:
> {code:sql}
> UPDATE AllTypes set innerLongCol = ?
> {code}
> For the optimized marshaller, the current DML implementation mutates the existing 
> inner property, thus confusing the DML statement re-run logic (the query re-runs 
> because the engine sees the value as concurrently modified, though the only change 
> is its own, and ultimately fails). The solution is to clone inner objects and set new 
> property values on them, because we need to keep the old value pristine.
> For the binary marshaller, the current DML implementation does not honor the property 
> hierarchy and, for the above example, just sets the {{innerLongCol}} field on the 
> {{AllTypes}} binary object and not on its child {{InnerType}} object. Thus, the index 
> will be updated and SELECTs for that column will return the correct value for that 
> field, but the inner state of the target property {{innerTypeCol}} does not change, 
> and the {{AllTypes}} object gets its own odd field {{innerLongCol}} which it does not 
> know what to do with (there is no metadata about it in the type descriptor).
> The patch for both problems is ready and lying on my shelf, waiting for the 1.8 
> release to happen before it can be applied. Then this will ultimately be fixed with 
> the rest of the known problems/improvements (Jira issues for them will follow).
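
A short sketch of the "clone before mutate" fix for the optimized-marshaller case, using the {{AllTypes}} type from the snippet above. This is illustrative only; {{copyOf()}} is a hypothetical copy helper, not part of the DML engine.

{code:java}
class InnerUpdateSketch {
    /** Builds a new value with the inner field updated, leaving the old value pristine. */
    static AllTypes withUpdatedInnerLong(AllTypes oldVal, Long newInnerLong) {
        AllTypes newVal = copyOf(oldVal);                       // copy the outer object
        AllTypes.InnerType inner = copyOf(oldVal.innerTypeCol); // copy the inner object too
        inner.innerLongCol = newInnerLong;                      // mutate only the copies
        newVal.innerTypeCol = inner;
        return newVal; // oldVal is untouched, so the concurrent-modification re-run check still passes
    }

    // Hypothetical deep-copy helpers, elided in this sketch.
    static AllTypes copyOf(AllTypes v) { throw new UnsupportedOperationException(); }
    static AllTypes.InnerType copyOf(AllTypes.InnerType v) { throw new UnsupportedOperationException(); }
}
{code}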



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4363) Inner properties mutation broken in SQL UPDATE

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4363:

Fix Version/s: (was: 1.8)
   2.0

> Inner properties mutation broken in SQL UPDATE
> --
>
> Key: IGNITE-4363
> URL: https://issues.apache.org/jira/browse/IGNITE-4363
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, SQL
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> Discovered in the course of working on IGNITE-4340.
> Say we have the following type for cache values:
> {code:java}
> static final class AllTypes implements Serializable {
> /**
>  * Data Long.
>  */
> @QuerySqlField
> Long longCol;
> /**
>  * Inner type object.
>  */
> @QuerySqlField
> InnerType innerTypeCol;
> /** */
> static final class InnerType implements Serializable {
> /** */
> @QuerySqlField
> Long innerLongCol;
> /** */
> @QuerySqlField
> String innerStrCol;
>}
> }
> {code}
> Queries like this fail for both optimized and binary marshaller:
> {code:sql}
> UPDATE AllTypes set innerLongCol = ?
> {code}
> For the optimized marshaller, the current DML implementation mutates the existing 
> inner property, thus confusing the DML statement re-run logic (the query re-runs 
> because the engine sees the value as concurrently modified, though the only change 
> is its own, and ultimately fails). The solution is to clone inner objects and set new 
> property values on them, because we need to keep the old value pristine.
> For the binary marshaller, the current DML implementation does not honor the property 
> hierarchy and, for the above example, just sets the {{innerLongCol}} field on the 
> {{AllTypes}} binary object and not on its child {{InnerType}} object. Thus, the index 
> will be updated and SELECTs for that column will return the correct value for that 
> field, but the inner state of the target property {{innerTypeCol}} does not change, 
> and the {{AllTypes}} object gets its own odd field {{innerLongCol}} which it does not 
> know what to do with (there is no metadata about it in the type descriptor).
> The patch for both problems is ready and lying on my shelf, waiting for the 1.8 
> release to happen before it can be applied. Then this will ultimately be fixed with 
> the rest of the known problems/improvements (Jira issues for them will follow).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4106) SQL: parallelize sql queries over cache local partitions

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778467#comment-15778467
 ] 

ASF GitHub Bot commented on IGNITE-4106:


GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/1386

IGNITE-4106: SQL: parallelize sql queries over cache local partitions

Tree index split into segments in GridH2StripedTreeIndex.
Tests added.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4106

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1386


commit d013500bd897505ec6cbfb2f25d54f1f938f2f83
Author: AMRepo 
Date:   2016-12-26T14:54:01Z

Tree index splitted into segmentes in GridH2StripedTreeIndex

Tests added

commit 3202889b6005eaad77026e4682e7b6e8fb336ba5
Author: AMRepo 
Date:   2016-12-26T15:06:11Z

Tree index splitted into segmentes in GridH2StripedTreeIndex
Tests added




> SQL: parallelize sql queries over cache local partitions
> 
>
> Key: IGNITE-4106
> URL: https://issues.apache.org/jira/browse/IGNITE-4106
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.6, 1.7
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>  Labels: performance, sql
> Fix For: 2.0
>
> Attachments: 1node-4thread.jfr, 4node-1thread.jfr
>
>
> If we run SQL query on cache partitioned over several cluster nodes, it will 
> be split into several queries running in parallel. But really we will have 
> one thread per query on each node.
> So, for now, to improve SQL query performance we need to run more Ignite 
> instances or split caches manually.
> It seems to be better to split local SQL queries over cache partitions, so we 
> would be able to parallelize SQL query on every single node and utilize CPU 
> more efficiently.
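
For reference, a hedged sketch of how such per-cache parallelism is typically exposed to users. The {{setQueryParallelism}} setter is an assumption here, based on how this work eventually surfaced in the configuration API; the exact method may differ.

{code:java}
import org.apache.ignite.configuration.CacheConfiguration;

class QueryParallelismSketch {
    static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setIndexedTypes(Integer.class, String.class);

        // Split local index scans into 4 segments so a single query
        // can use 4 threads on each node.
        ccfg.setQueryParallelism(4);

        return ccfg;
    }
}
{code}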



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4106) SQL: parallelize sql queries over cache local partitions

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778462#comment-15778462
 ] 

ASF GitHub Bot commented on IGNITE-4106:


Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/1385


> SQL: parallelize sql queries over cache local partitions
> 
>
> Key: IGNITE-4106
> URL: https://issues.apache.org/jira/browse/IGNITE-4106
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.6, 1.7
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>  Labels: performance, sql
> Fix For: 2.0
>
> Attachments: 1node-4thread.jfr, 4node-1thread.jfr
>
>
> If we run SQL query on cache partitioned over several cluster nodes, it will 
> be split into several queries running in parallel. But really we will have 
> one thread per query on each node.
> So, for now, to improve SQL query performance we need to run more Ignite 
> instances or split caches manually.
> It seems to be better to split local SQL queries over cache partitions, so we 
> would be able to parallelize SQL query on every single node and utilize CPU 
> more efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4016) ODBC: Enhance ODBC example by using DML statements

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778453#comment-15778453
 ] 

ASF GitHub Bot commented on IGNITE-4016:


GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/1385

IGNITE-4016: SQL: parallelize sql queries over cache local partitions

Tree index split into segments in GridH2StripedTreeIndex.
Tests added.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4106

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1385


commit d013500bd897505ec6cbfb2f25d54f1f938f2f83
Author: AMRepo 
Date:   2016-12-26T14:54:01Z

Tree index splitted into segmentes in GridH2StripedTreeIndex

Tests added




> ODBC: Enhance ODBC example by using DML statements
> --
>
> Key: IGNITE-4016
> URL: https://issues.apache.org/jira/browse/IGNITE-4016
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Reporter: Denis Magda
>Assignee: Vladimir Ozerov
> Fix For: 1.8
>
>
> In Apache Ignite 1.8 the community is planning to release DML support. 
> To adopt DML usage we need to improve existing examples or add additional ones.
> On the ODBC side we can improve the existing odbc_example by implementing the 
> following scenario:
> - fill up a cache using INSERT commands instead of cache.puts().
> - execute the existing SELECT statements.
> - perform a cache update using UPDATE statements.
> - execute SELECT statements showing that the data has been changed.
> - remove a part of the data from the cache using the DELETE command.
> - execute SELECTs again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4106) SQL: parallelize sql queries over cache local partitions

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778443#comment-15778443
 ] 

ASF GitHub Bot commented on IGNITE-4106:


Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/1198


> SQL: parallelize sql queries over cache local partitions
> 
>
> Key: IGNITE-4106
> URL: https://issues.apache.org/jira/browse/IGNITE-4106
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.6, 1.7
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>  Labels: performance, sql
> Fix For: 2.0
>
> Attachments: 1node-4thread.jfr, 4node-1thread.jfr
>
>
> If we run SQL query on cache partitioned over several cluster nodes, it will 
> be split into several queries running in parallel. But really we will have 
> one thread per query on each node.
> So, for now, to improve SQL query performance we need to run more Ignite 
> instances or split caches manually.
> It seems to be better to split local SQL queries over cache partitions, so we 
> would be able to parallelize SQL query on every single node and utilize CPU 
> more efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-1943) ignitevisorcmd: wrong behavior for "mclear" command

2016-12-26 Thread Saikat Maitra (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778427#comment-15778427
 ] 

Saikat Maitra commented on IGNITE-1943:
---

[~anovikov]

Thank you Andrey for reviewing the PR.

1. I didn't remove the mclear command as part of this PR, as I thought it may be 
actively used during testing and development. Maybe we can remove it 
as part of a separate JIRA ticket, or shall I remove it as part of this PR?
2. I can see the missing (@nX) variable printed in the node command as the issue. I 
will go ahead and change it for the host, cache and alert variables. Can you share a 
few examples where these are used?
3. Yes, the mcompact command will change the order. I have used the 
setVarIfAbsent() method, but with that only the last node gets a new (@n0) value and 
the other nodes' values are empty. Also, clearNamespace() allows us to easily reclaim 
the unused memory; otherwise we have to use some kind of defensive copy 
mechanism to copy only the currently active values and discard the old unused 
values.

Regards,
Saikat
 

> ignitevisorcmd: wrong behavior for "mclear" command
> ---
>
> Key: IGNITE-1943
> URL: https://issues.apache.org/jira/browse/IGNITE-1943
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vasilisa  Sidorova
>Assignee: Saikat Maitra
> Fix For: 2.0
>
>
> -
> DESCRIPTION
> -
> "mclear" should clears all Visor console variables before they will be 
> updated during next run of the corresponding command. It OK for all type of 
> shortcut (@cX, @tX, @aX, @eX) exept shortcut for node-id variables. After 
> "mclear" @nX are disappear from any statistics 
> -
> STEPS FOR REPRODUCE
> -
> # Start cluster
> # Start visorcmd ([IGNITE_HOME]/bin/ignitevisorcmd.sh)
> # Connect to the started cluster (visor> open)
> # Execute any tasks in it (for example, run any example from EventsExample, 
> DeploymentExample, ComputeContinuousMapperExample, ComputeTaskMapExample, 
> ComputeTaskSplitExample) 
> # Create several alerts (visor> alert)
> # Run visor "mclear" command
> # Run visor "node" command
> -
> ACTUAL RESULT
> -
> There isn't @nX values in the first table's column:
> {noformat}
> visor> node
> Select node from:
> +==+
> | # |   Node ID8(@), IP   | Up Time  | CPUs | CPU Load | Free Heap |
> +==+
> | 0 | 6B3E4033, 127.0.0.1 | 00:38:41 | 8| 0.20 %   | 97.00 %   |
> | 1 | 0A0D6989, 127.0.0.1 | 00:38:22 | 8| 0.17 %   | 97.00 %   |
> | 2 | 0941E33F, 127.0.0.1 | 00:35:46 | 8| 0.23 %   | 97.00 %   |
> +--+
> {noformat}
> -
> EXPECTED RESULT
> -
> There is @nX values in the first table's column:
> {noformat}
> visor> node
> Select node from:
> +===+
> | # | Node ID8(@), IP  | Up Time  | CPUs | CPU Load | Free Heap |
> +===+
> | 0 | 6B3E4033(@n0), 127.0.0.1 | 00:38:13 | 8| 0.23 %   | 97.00 %   |
> | 1 | 0A0D6989(@n1), 127.0.0.1 | 00:37:54 | 8| 0.27 %   | 97.00 %   |
> | 2 | 0941E33F(@n2), 127.0.0.1 | 00:35:18 | 8| 0.20 %   | 97.00 %   |
> +---+
> {noformat}
> "mclear" should get effect only for "mget @nX" command (with result "Missing 
> variable with name: '@nX'") and @nX variables  should get new values after 
> "node" (or "tasks") command execution. Like in case for all others @ 
> variables (@cX, @tX, @aX, @eX) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4367) .NET: Fix flaky tests

2016-12-26 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-4367.

Resolution: Fixed
  Assignee: (was: Pavel Tupitsyn)

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4367) .NET: Fix flaky tests

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778425#comment-15778425
 ] 

Pavel Tupitsyn commented on IGNITE-4367:


TestClusterRestart fixed; IGNITE-4494 created to address examples test issue.

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4494) .NET: Optimize ExamplesTest.TestRemoteNodes

2016-12-26 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4494:
--

 Summary: .NET: Optimize ExamplesTest.TestRemoteNodes
 Key: IGNITE-4494
 URL: https://issues.apache.org/jira/browse/IGNITE-4494
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 1.8
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.0


Currently for each example test with remote node we do the following:
* Start monitoring node
* Start a standalone node
* Stop monitoring node
* Start example

But since IGNITE-3427 has been implemented, all examples have the same config. We can 
start standalone node(s) once and run all examples, saving a lot of time 
(currently example tests are the most time consuming).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4493) ODBC: Add missing diagnostic records in case of API errors

2016-12-26 Thread Sergey Kalashnikov (JIRA)
Sergey Kalashnikov created IGNITE-4493:
--

 Summary: ODBC: Add missing diagnostic records in case of API errors
 Key: IGNITE-4493
 URL: https://issues.apache.org/jira/browse/IGNITE-4493
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 1.8
Reporter: Sergey Kalashnikov
Assignee: Sergey Kalashnikov
 Fix For: 2.0


The following functions in the Ignite ODBC driver do not strictly follow the API 
documentation with regard to diagnostics. It should be possible to obtain 
additional information about errors via the SQLGetDiagRec() call.

SQLAllocHandle
SQLFreeHandle
SQLFreeStmt
SQLBindCol
SQLFetchScroll
SQLBindParameter




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4492) Add MBean for StripedExecutor

2016-12-26 Thread Yakov Zhdanov (JIRA)
Yakov Zhdanov created IGNITE-4492:
-

 Summary: Add MBean for StripedExecutor
 Key: IGNITE-4492
 URL: https://issues.apache.org/jira/browse/IGNITE-4492
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 1.9
Reporter: Yakov Zhdanov
Priority: Minor
 Fix For: 2.0


# Add an MBean interface and implementation, as we have for 
org.apache.ignite.internal.ThreadPoolMXBeanAdapter.
# Register the MBean together with the other pools' MBeans.
# Unregister the MBean on Ignite stop.
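
A minimal sketch of what such an MBean could expose; the interface name and metrics below are assumptions, mirroring the {{ThreadPoolMXBeanAdapter}} pattern referenced above.

{code:java}
public interface StripedExecutorMXBean {
    /** @return Number of stripes (worker threads) in the executor. */
    int getStripesCount();

    /** @return Total number of tasks currently queued across all stripes. */
    int getTotalQueueSize();

    /** @return Total number of tasks completed across all stripes. */
    long getTotalCompletedTasksCount();
}
{code}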



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4367) .NET: Fix flaky tests

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778327#comment-15778327
 ] 

Pavel Tupitsyn commented on IGNITE-4367:


Reopened, current test issues:
* Possible hang in examples tests
* TestClusterRestart is flaky


> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-3758) ODBC: Add tests for SQL_SQL92_DATETIME_FUNCTIONS.

2016-12-26 Thread Igor Sapego (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego resolved IGNITE-3758.
-
Resolution: Fixed

Resolved by IGNITE-3750.

> ODBC: Add tests for SQL_SQL92_DATETIME_FUNCTIONS.
> -
>
> Key: IGNITE-3758
> URL: https://issues.apache.org/jira/browse/IGNITE-3758
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Igor Sapego
>Priority: Minor
> Fix For: 2.0
>
>
> Let's add tests for SQL_SQL92_DATETIME_FUNCTIONS [1]
> Affected functions:
> SQL_SDF_CURRENT_DATE
> SQL_SDF_CURRENT_TIME
> SQL_SDF_CURRENT_TIMESTAMP
> [1] https://msdn.microsoft.com/en-us/library/ms711681(v=vs.85).aspx#Anchor_16



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (IGNITE-4367) .NET: Fix flaky tests

2016-12-26 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn reopened IGNITE-4367:

  Assignee: Pavel Tupitsyn

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4441) Define plugin API in .NET

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778305#comment-15778305
 ] 

Pavel Tupitsyn commented on IGNITE-4441:


1) Fixed.
2) Added sample.
3) Refactored to PlatformProcessor.startPlugins() which is called from kernal 
right after Java plugins.

> Define plugin API in .NET
> -
>
> Key: IGNITE-4441
> URL: https://issues.apache.org/jira/browse/IGNITE-4441
> Project: Ignite
>  Issue Type: Sub-task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> Define plugin API in .NET similar to Java API:
> * {{IgniteConfiguration.PluginConfigurations}}
> * {{IPluginProvider}}
> * {{IPluginContext}}
> Should work like this:
> * Plugin author implements {{IPluginProvider}}
> * We discover plugins on Ignite start by examining all DLL files in the 
> folder, load DLLs where {{IPluginProvider}} implementations are present, 
> instantiate these implementations, and call 
> {{IPluginProvider.Start(IPluginContext)}} method.
> * Plugin user can retrieve plugin via {{IIgnite.GetPlugin(string name)}}, 
> or via helper extension method provided by plugin author.
> This task does not include the possibility to interact with Java from the 
> plugin code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4470) ODBC: Add support for logger configuration

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778274#comment-15778274
 ] 

ASF GitHub Bot commented on IGNITE-4470:


GitHub user skalashnikov opened a pull request:

https://github.com/apache/ignite/pull/1384

IGNITE-4470 Added support for log file configuration via environmental 
variable IGNITE_ODBC_LOG_PATH



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4470

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1384


commit 16d133630e0f2089e29c9b6df5a37e7bf73b90ab
Author: skalashnikov 
Date:   2016-12-22T10:11:37Z

IGNITE-4470 added support for log file configuration via environment 
variable IGNITE_ODBC_LOG_PATH

commit 7a0945d6f3a2b553eed5fd42e4fb8e1abb8382f2
Author: skalashnikov 
Date:   2016-12-26T12:27:55Z

Merge branch 'master' of https://github.com/apache/ignite into ignite-4470




> ODBC: Add support for logger configuration
> --
>
> Key: IGNITE-4470
> URL: https://issues.apache.org/jira/browse/IGNITE-4470
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Affects Versions: 1.8
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.0
>
>
> Users of the ODBC driver should be able to configure logging without recompiling 
> the driver.
> The logger should be set up via an environment variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4476) GridIoManger must always process messages asynchronously

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778265#comment-15778265
 ] 

Vladimir Ozerov commented on IGNITE-4476:
-

Let's benchmark it. I prefer to have correct semantics and deadlock-safety 
first, and performance second.

> GridIoManger must always process messages asynchronously
> 
>
> Key: IGNITE-4476
> URL: https://issues.apache.org/jira/browse/IGNITE-4476
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> If a message is to be sent to a remote node, we just send it (surprise :-)). But 
> if a message is to be sent to the local node, we have a strange "optimization": we 
> process it synchronously in the same thread. 
> This is wrong, as it easily leads to all kinds of weird starvations and 
> deadlocks.
> *Solution*
> Never ever process messages synchronously. For the local node we should put the 
> message runnable into the appropriate thread pool and exit.
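
A tiny sketch of the proposed rule, with placeholder names (not the actual {{GridIoManager}} code): a message addressed to the local node goes through a pool exactly as it would after arriving from the network.

{code:java}
import java.util.concurrent.ExecutorService;

class AsyncLocalSendSketch {
    /** Sends a message; never runs the handler in the caller thread for the local node. */
    static void send(boolean locNode, Runnable msgHandler, ExecutorService pool) {
        if (locNode)
            pool.execute(msgHandler);   // asynchronous even for the local node
        else
            sendToRemote(msgHandler);   // network path
    }

    private static void sendToRemote(Runnable msgHandler) {
        // Actual remote send omitted in this sketch.
    }
}
{code}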



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4479) Remove startTime() and duration() methods from IgniteFuture

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778261#comment-15778261
 ] 

Vladimir Ozerov commented on IGNITE-4479:
-

It is used only in one place for several cache futures. Let's leave it only 
where it is really needed.

> Remove startTime() and duration() methods from IgniteFuture
> ---
>
> Key: IGNITE-4479
> URL: https://issues.apache.org/jira/browse/IGNITE-4479
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> These methods are of no use for users and should not be in the future API. The 
> only place where they are used is the cache exchange manager. 
> We should remove these methods from all futures and leave them only for the cache 
> exchange logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4480:

Comment: was deleted

(was: Right, but {{cancel()}} should always lead to {{CancelledException}} from 
the user's perspective. You may or may not perform some special actions in 
response to cancel, but from the outside the behavior must be consistent.
This topic was discussed on concurrency-interest multiple times, and the bottom 
line is that the correct design decouples execution from cancellation handling. 
This is essentially what I propose.)

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if user invoke "cancel()" on future, he expects that it will be 
> completed immediately. E.g. this is how JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform user about cancellation 
> through flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete future with special "CancellationException" 
> immediately.
> 2) {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add {{isCompletedExceptionally()}} method which will return 
> {{true}} in case future was completed with any kind of exception. This will 
> work similar to JDK {{CompletableFuture}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778258#comment-15778258
 ] 

Vladimir Ozerov commented on IGNITE-4480:
-

Right, but {{cancel()}} should always lead to {{CancelledException}} from the 
user's perspective. You may or may not perform some special actions in response 
to cancel, but from the outside the behavior must be consistent.
This topic was discussed on concurrency-interest multiple times, and the bottom 
line is that the correct design decouples execution from cancellation handling. 
This is essentially what I propose.
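
For illustration, a minimal sketch of these semantics using plain JDK types 
rather than Ignite internals (the class and the {{cancelHandler}} hook are 
assumptions made for the example, not the actual Ignite API):

{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;

/**
 * Sketch only: cancel() completes the future with CancellationException
 * immediately, while operation-specific cancellation handling runs separately.
 */
public class CancellableFutureSketch<T> extends CompletableFuture<T> {
    private final Runnable cancelHandler;

    public CancellableFutureSketch(Runnable cancelHandler) {
        this.cancelHandler = cancelHandler;
    }

    @Override public boolean cancel(boolean mayInterruptIfRunning) {
        // Complete first, so get() throws CancellationException right away.
        boolean cancelled = completeExceptionally(new CancellationException());

        if (cancelled)
            cancelHandler.run(); // decoupled, operation-specific reaction to cancel

        return cancelled;
    }
}
{code}

After {{cancel()}}, {{get()}} throws {{CancellationException}} and 
{{isCancelled()}} returns {{true}}, which matches the JDK {{CompletableFuture}} 
behavior referenced in the issue.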

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if user invoke "cancel()" on future, he expects that it will be 
> completed immediately. E.g. this is how JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform user about cancellation 
> through flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete future with special "CancellationException" 
> immediately.
> 2) {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add {{isCompletedExceptionally()}} method which will return 
> {{true}} in case future was completed with any kind of exception. This will 
> work similar to JDK {{CompletableFuture}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778259#comment-15778259
 ] 

Vladimir Ozerov commented on IGNITE-4480:
-

Right, but {{cancel()}} should always lead to {{CancelledException}} from the 
user's perspective. You may or may not perform some special actions in response 
to cancel, but from the outside the behavior must be consistent.
This topic was discussed on concurrency-interest multiple times, and the bottom 
line is that the correct design decouples execution from cancellation handling. 
This is essentially what I propose.

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if user invoke "cancel()" on future, he expects that it will be 
> completed immediately. E.g. this is how JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform user about cancellation 
> through flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete future with special "CancellationException" 
> immediately.
> 2) {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add {{isCompletedExceptionally()}} method which will return 
> {{true}} in case future was completed with any kind of exception. This will 
> work similar to JDK {{CompletableFuture}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4490) Optimize DML for fast INSERT and MERGE

2016-12-26 Thread Alexander Paschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15773190#comment-15773190
 ] 

Alexander Paschenko edited comment on IGNITE-4490 at 12/26/16 12:32 PM:


Implemented analysis to determine if statement may be run without involving SQL 
at all and if key and value are given explicitly in the query; moving further


was (Author: al.psc):
Implemented analysis to determine if stetement may be run without involving SQL 
at all and if key and value are given explicitly in the query; moving further

> Optimize DML for fast INSERT and MERGE
> --
>
> Key: IGNITE-4490
> URL: https://issues.apache.org/jira/browse/IGNITE-4490
> Project: Ignite
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> It's possible to avoid any SQL querying and map some INSERT and MERGE 
> statements to cache operations in a way similar to that of UPDATE and DELETE 
> - i.e. don't make queries when there are no expressions to evaluate in the 
> query and enhance update plans to perform direct cache operations when INSERT 
> and MERGE affect columns {{_key}} and {{_val}} only.
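
For illustration only, a sketch of such a fast path with a plain map standing in 
for the cache; the class name and the flags are assumptions made for the 
example, not the actual update-plan code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch only: when an INSERT touches only _key and _val and has no SQL
 * expressions to evaluate, execute it as a direct cache operation.
 */
public class FastInsertSketch {
    private final Map<Object, Object> cache = new ConcurrentHashMap<>();

    /** @return true if the statement was executed without involving SQL. */
    public boolean tryFastInsert(boolean onlyKeyValColumns, boolean hasExpressions,
        Object key, Object val) {
        if (!onlyKeyValColumns || hasExpressions)
            return false; // fall back to the regular SQL path

        // INSERT semantics: succeed only if the key is not present yet.
        return cache.putIfAbsent(key, val) == null;
    }
}
{code}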



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4470) ODBC: Add support for logger configuration

2016-12-26 Thread Sergey Kalashnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kalashnikov updated IGNITE-4470:
---
Description: 
Users of ODBC driver should be able to configure logging without driver 
recompilation.
Logger should be setup via environment variable.

  was:
Users of ODBC driver should be able to configure logging without driver 
recompilation.
Logger should be setup via connection string parameter.


> ODBC: Add support for logger configuration
> --
>
> Key: IGNITE-4470
> URL: https://issues.apache.org/jira/browse/IGNITE-4470
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Affects Versions: 1.8
>Reporter: Sergey Kalashnikov
>Assignee: Sergey Kalashnikov
> Fix For: 2.0
>
>
> Users of ODBC driver should be able to configure logging without driver 
> recompilation.
> Logger should be setup via environment variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4479) Remove startTime() and duration() methods from IgniteFuture

2016-12-26 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778244#comment-15778244
 ] 

Yakov Zhdanov commented on IGNITE-4479:
---

I don't think we should do this. For now it provides a unified mechanism to 
determine how long an operation takes, which is used to detect hangs or 
deadlocks in the cache.
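
For illustration, a minimal sketch of the kind of check these methods enable 
(the class, names and threshold are assumptions, not the actual exchange-manager 
code):

{code:java}
/** Sketch only: flag a future as a possible hang when it runs for too long. */
public class FutureDurationCheckSketch {
    public static boolean possibleHang(long futStartTimeMs, long thresholdMs) {
        long duration = System.currentTimeMillis() - futStartTimeMs;

        return duration > thresholdMs; // candidate for a "possible hang/deadlock" warning
    }
}
{code}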

> Remove startTime() and duration() methods from IgniteFuture
> ---
>
> Key: IGNITE-4479
> URL: https://issues.apache.org/jira/browse/IGNITE-4479
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> These methods are of no use for users and should not be in future API. The 
> only place where they are used is cache exchange manager. 
> We should remove these methods from all futures and leave them only for cache 
> exchange logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4476) GridIoManger must always process messages asynchronously

2016-12-26 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778243#comment-15778243
 ] 

Yakov Zhdanov commented on IGNITE-4476:
---

I would say this is the plugin's problem. 

We are working now to remove all the rwLocks, so your example will soon no 
longer be valid :)

I am afraid that if we implement your suggestion, local cache modifications 
will be slowed down greatly.

> GridIoManger must always process messages asynchronously
> 
>
> Key: IGNITE-4476
> URL: https://issues.apache.org/jira/browse/IGNITE-4476
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> If message is to be sent to remote node, we just send it (surprise :-)). But 
> if message is to be sent to local node, we have a strange "optimization" - we 
> process it synchronously in the same thread. 
> This is wrong as it easily leads to all kind of weird starvations and 
> deadlocks.
> *Solution*
> Never ever process messages synchronously. For local node we should put 
> message runnable into appropriate thread pool and exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778236#comment-15778236
 ] 

Yakov Zhdanov commented on IGNITE-4480:
---

Because cancellation logic is operation-specific, and for the operations that 
can actually be cancelled the developer has to override and extend it in any 
case.

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if user invoke "cancel()" on future, he expects that it will be 
> completed immediately. E.g. this is how JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform user about cancellation 
> through flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete future with special "CancellationException" 
> immediately.
> 2) {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add {{isCompletedExceptionally()}} method which will return 
> {{true}} in case future was completed with any kind of exception. This will 
> work similar to JDK {{CompletableFuture}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4486) Add serialVersionUID to AttributeNodeFilter

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778218#comment-15778218
 ] 

ASF GitHub Bot commented on IGNITE-4486:


Github user alexpaschenko closed the pull request at:

https://github.com/apache/ignite/pull/1380


> Add serialVersionUID to AttributeNodeFilter
> ---
>
> Key: IGNITE-4486
> URL: https://issues.apache.org/jira/browse/IGNITE-4486
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> Subj. - in particular, without it, TC build fails all at once



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4486) Add serialVersionUID to AttributeNodeFilter

2016-12-26 Thread Alexander Paschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Paschenko resolved IGNITE-4486.
-
Resolution: Fixed

Fixed in master by Vlad, closing this.

> Add serialVersionUID to AttributeNodeFilter
> ---
>
> Key: IGNITE-4486
> URL: https://issues.apache.org/jira/browse/IGNITE-4486
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.8
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> Subj. - in particular, without it, TC build fails all at once



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3875) Create separate thread pool for data streamer.

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778196#comment-15778196
 ] 

Vladimir Ozerov commented on IGNITE-3875:
-

Merged with {{ignite-2.0}}, reviewed, performed minor fixes and 
simplifications. Now waiting for test results. New PR is #1383.

> Create separate thread pool for data streamer.
> --
>
> Key: IGNITE-3875
> URL: https://issues.apache.org/jira/browse/IGNITE-3875
> Project: Ignite
>  Issue Type: Task
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> Currently data streamer requests are submitted to PUBLIC pool. Because of 
> this it is not safe to run streamer from compute tasks which is very 
> inconvenient.
> We should create separate thread pool for streamer and route streamer jobs to 
> it. Compatibility must be preserved.
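
For illustration, a minimal sketch of the routing idea with plain JDK executors; 
the policy constants, class name and pool sizes are assumptions, not the actual 
Ignite wiring:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch only: a dedicated pool for data streamer requests, separate from PUBLIC. */
public class StreamerPoolRoutingSketch {
    public static final byte PUBLIC_POOL = 0;
    public static final byte DATA_STREAMER_POOL = 1; // hypothetical new policy

    private final ExecutorService publicPool = Executors.newFixedThreadPool(8);
    private final ExecutorService streamerPool = Executors.newFixedThreadPool(8);

    /** Streamer jobs no longer compete with compute tasks in the public pool. */
    public ExecutorService poolFor(byte plc) {
        return plc == DATA_STREAMER_POOL ? streamerPool : publicPool;
    }
}
{code}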



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3875) Create separate thread pool for data streamer.

2016-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778193#comment-15778193
 ] 

ASF GitHub Bot commented on IGNITE-3875:


GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/1383

IGNITE-3875



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1383.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1383


commit 9b25a643ac97b2a832179d669397d552d2a3
Author: AMRepo 
Date:   2016-10-19T16:05:19Z

Implemented

commit 0862db26623db5bde0c744f4b3536a10fd7b3e52
Author: AMRepo 
Date:   2016-10-19T16:05:19Z

Implemented

commit f9304121945f07b3785d07518e435bc964233f2f
Author: AMRepo 
Date:   2016-10-19T18:46:33Z

Minors

commit c17ff0e716cdbaf759e0f34bbbf7834b87198d0a
Author: AMRepo 
Date:   2016-10-19T18:55:50Z

Merge remote-tracking branch 'origin/ignite-3875' into ignite-3875

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessor.java

commit c8f78c3e4a69fb001c8995bb1f9c204da7415403
Author: Andrey V. Mashenkov 
Date:   2016-10-20T10:12:21Z

Merge branch 'ignite-1.6.11' into ignite-3875

commit 380a0b8a01a2ae4c606dbf67b588896ca7be435b
Author: vozerov-gridgain 
Date:   2016-10-20T11:58:55Z

Fixed receiver serialization problem.

commit 0f927608fe7b10f6c79b24b87b5eeecd46f68717
Author: vozerov-gridgain 
Date:   2016-10-20T11:59:15Z

Merge remote-tracking branch 'upstream/ignite-3875' into ignite-3875

commit 448d6f594dfb484aa471c186fec80bb0e54949d6
Author: Andrey V. Mashenkov 
Date:   2016-10-20T12:05:37Z

Fixed receiver serialization problem.

commit c893da70a9757b16b0799adc8eaa29fa1b03d06e
Author: tledkov-gridgain 
Date:   2016-12-21T11:54:33Z

IGNITE-4399: IGFS: Merged IgfsSecondaryFileSystem and 
IgfsSecondaryFileSystemV2 interfaces. This closes #1346.

commit c5882a85f4e3a1f61723ac54fd92f087684df6da
Author: devozerov 
Date:   2016-12-26T11:15:42Z

Merge branch 'master' into ignite-2.0

commit 248eaa710b11982831f71ed6733d125a5bae6871
Author: devozerov 
Date:   2016-12-26T11:21:50Z

Merge branch 'ignite-2.0' into ignite-3875

# Conflicts:
#   modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java
#   
modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoManager.java
#   
modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoPolicy.java
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamProcessor.java
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java
#   
modules/core/src/test/java/org/apache/ignite/testframework/junits/GridTestKernalContext.java

commit 1d0a10c8a48de51e122821e2c3b47b0fff83ac7f
Author: devozerov 
Date:   2016-12-26T11:23:41Z

Fixes after merge.

commit 1883359d5ddc8deaddebb35b30e135ad18333070
Author: devozerov 
Date:   2016-12-26T11:23:51Z

Fixes after merge.

commit 13e514e99a715ce8a276c7fd1aa12f384a727015
Author: devozerov 
Date:   2016-12-26T11:31:45Z

Fixes after review.

commit 34aef7c17a59b9fdcc3df70382cb84dca1577a4b
Author: devozerov 
Date:   2016-12-26T11:39:58Z

Minors.

commit b39ae48535ed5ca1ec0bab7b8d64d6b1fe5d3715
Author: devozerov 
Date:   2016-12-26T11:42:07Z

Minors.

commit 87f103f92aeda9939e536b878f281b57aa1c31f2
Author: devozerov 
Date:   2016-12-26T11:44:02Z

Minors.

commit af5e0482d767bd79fa152c486d991628d928827c
Author: devozerov 
Date:   2016-12-26T11:47:26Z

Minors.




> Create separate thread pool for data streamer.
> --
>
> Key: IGNITE-3875
> URL: https://issues.apache.org/jira/browse/IGNITE-3875
> Project: Ignite
>  Issue Type: Task
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> Currently data streamer requests are submitted to PUBLIC pool. Because of 
> this it is not safe to run streamer from compute tasks which is 

[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Description: 
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
{code}
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP
{code}

Between  4-5

{noformat}
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

Invoke from 10.116.172.4
{code}
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP
{code}

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented), after drop traffic:
{noformat}
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
{noformat}

And all operations stopped at the same time.

  was:
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
{code}
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP
{code}

Between  4-5

{noformat}
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

Invoke from 10.116.172.4
{code}
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP
{code}

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
{noformat}
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
{noformat}

And all operations stopped at the same time.


> Commutation loss between two nodes leads to hang whole cluster.
> ---
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
> Attachments: Segmentation.7z
>
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> {code}
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> {code}
> Between  4-5
> {noformat}
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> Invoke from 10.116.172.4
> {code}
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> {code}
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented), after drop traffic:
> {noformat}
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> {noformat}
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Summary: Commutation loss between two nodes leads to hang whole cluster  
(was: Commutation loss between two nodes leads to hang whole cluster.)

> Commutation loss between two nodes leads to hang whole cluster
> --
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
> Attachments: Segmentation.7z
>
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> {code}
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> {code}
> Between  4-5
> {noformat}
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> Invoke from 10.116.172.4
> {code}
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> {code}
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented), after drop traffic:
> {noformat}
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> {noformat}
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Description: 
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
{code}
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP
{code}

Between  4-5

{noformat}
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

Invoke from 10.116.172.4
{code}
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP
{code}

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
{noformat}
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
{noformat}

And all operations stopped at the same time.

  was:
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
{noformat}
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP
{noformat}

Between  4-5

{noformat}
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

Invoke from 10.116.172.4
{noformat}
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP
{noformat}

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
{noformat}
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
{noformat}

And all operations stopped at the same time.


> Commutation loss between two nodes leads to hang whole cluster.
> ---
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
> Attachments: Segmentation.7z
>
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> {code}
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> {code}
> Between  4-5
> {noformat}
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> Invoke from 10.116.172.4
> {code}
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> {code}
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented.), after drop traffic:
> {noformat}
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> {noformat}
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Attachment: Segmentation.7z

> Commutation loss between two nodes leads to hang whole cluster.
> ---
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
> Attachments: Segmentation.7z
>
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> {noformat}
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> {noformat}
> Between  4-5
> {noformat}
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> Invoke from 10.116.172.4
> {noformat}
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> {noformat}
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented.), after drop traffic:
> {noformat}
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> {noformat}
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-4374) Ignite should validate JVM and OS configuration and output warning in log

2016-12-26 Thread VYACHESLAV DARADUR (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VYACHESLAV DARADUR reassigned IGNITE-4374:
--

Assignee: VYACHESLAV DARADUR

> Ignite should validate JVM and OS configuration and output warning in log
> -
>
> Key: IGNITE-4374
> URL: https://issues.apache.org/jira/browse/IGNITE-4374
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yakov Zhdanov
>Assignee: VYACHESLAV DARADUR
>
> Currently we have GridPerformanceSuggestions that output suggestions to logs 
> on Ignite start on how Ignite can be improved.
> I suggest to go a little bit deeper and validate more configuration options 
> and add validation for JVM and OS settings.
> Ignite should output warning if:
> * GC logging is not enabled
> * MaxDirectMemorySize is not set (-XX:MaxDirectMemorySize)
> * Heap size is greater than 30,5G and JVM cannot use compressed oops
> * Any of the recommended OS setting described here 
> https://apacheignite.readme.io/docs/jvm-and-system-tuning are not properly 
> set 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Description: 
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
{noformat}
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP
{noformat}

Between  4-5

{noformat}
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

Invoke from 10.116.172.4
{noformat}
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP
{noformat}

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
{noformat}
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
{noformat}

And all operations stopped at the same time.

  was:
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP

Between  4-5

4 (10.116.172.4)  5 (10.116.64.14)

Invoke from 10.116.172.4
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]

And all operations stopped at the same time.


> Commutation loss between two nodes leads to hang whole cluster.
> ---
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> {noformat}
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> {noformat}
> Between  4-5
> {noformat}
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> Invoke from 10.116.172.4
> {noformat}
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> {noformat}
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented.), after drop traffic:
> {noformat}
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> {noformat}
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-4491:
--
Description: 
Reproduction steps:
1) Start nodes:

{noformat}
DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)
{noformat}

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,
{noformat}
1 (10.116.172.1)  8 (10.116.64.11)
{noformat}

Drop all input and output traffic
Invoke from 10.116.172.1
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP

Between  4-5

4 (10.116.172.4)  5 (10.116.64.14)

Invoke from 10.116.172.4
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]

And all operations stopped at the same time.

  was:
Reproduction steps:
1) Start nodes:

DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,

1 (10.116.172.1)  8 (10.116.64.11)

Drop all input and output traffic
Invoke from 10.116.172.1
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP

Between  4-5

4 (10.116.172.4)  5 (10.116.64.14)

Invoke from 10.116.172.4
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]

And all operations stopped at the same time.


> Commutation loss between two nodes leads to hang whole cluster.
> ---
>
> Key: IGNITE-4491
> URL: https://issues.apache.org/jira/browse/IGNITE-4491
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.8
>Reporter: Vladislav Pyatkov
>Priority: Critical
>
> Reproduction steps:
> 1) Start nodes:
> {noformat}
> DC1   DC2
> 1 (10.116.172.1)  8 (10.116.64.11)
> 2 (10.116.172.2)  7 (10.116.64.12)
> 3 (10.116.172.3)  6 (10.116.64.13)
> 4 (10.116.172.4)  5 (10.116.64.14)
> {noformat}
> each node have client which run in same host with server (look source in 
> attachment).
> 2) Drop connection
> Between 1-8,
> {noformat}
> 1 (10.116.172.1)  8 (10.116.64.11)
> {noformat}
> Drop all input and output traffic
> Invoke from 10.116.172.1
> iptables -A INPUT -s 10.116.64.11 -j DROP
> iptables -A OUTPUT -d 10.116.64.11 -j DROP
> Between  4-5
> 4 (10.116.172.4)  5 (10.116.64.14)
> Invoke from 10.116.172.4
> iptables -A INPUT -s 10.116.64.14 -j DROP
> iptables -A OUTPUT -d 10.116.64.14 -j DROP
> 3) Stop the grid, after several seconds
> If you are looking into logs, you can find which node was segmented (pay 
> attention, which clients did not segmented.), after drop traffic:
> [12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
> Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]
> And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4491) Commutation loss between two nodes leads to hang whole cluster.

2016-12-26 Thread Vladislav Pyatkov (JIRA)
Vladislav Pyatkov created IGNITE-4491:
-

 Summary: Commutation loss between two nodes leads to hang whole 
cluster.
 Key: IGNITE-4491
 URL: https://issues.apache.org/jira/browse/IGNITE-4491
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.8
Reporter: Vladislav Pyatkov
Priority: Critical


Reproduction steps:
1) Start nodes:

DC1   DC2

1 (10.116.172.1)  8 (10.116.64.11)
2 (10.116.172.2)  7 (10.116.64.12)
3 (10.116.172.3)  6 (10.116.64.13)
4 (10.116.172.4)  5 (10.116.64.14)

each node have client which run in same host with server (look source in 
attachment).

2) Drop connection

Between 1-8,

1 (10.116.172.1)  8 (10.116.64.11)

Drop all input and output traffic
Invoke from 10.116.172.1
iptables -A INPUT -s 10.116.64.11 -j DROP
iptables -A OUTPUT -d 10.116.64.11 -j DROP

Between  4-5

4 (10.116.172.4)  5 (10.116.64.14)

Invoke from 10.116.172.4
iptables -A INPUT -s 10.116.64.14 -j DROP
iptables -A OUTPUT -d 10.116.64.14 -j DROP

3) Stop the grid, after several seconds

If you are looking into logs, you can find which node was segmented (pay 
attention, which clients did not segmented.), after drop traffic:
[12:04:33,914][INFO][disco-event-worker-#211%null%][GridDiscoveryManager] 
Topology snapshot [ver=18, servers=6, clients=8, CPUs=456, heap=68.0GB]

And all operations stopped at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-4441) Define plugin API in .NET

2016-12-26 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-4441:
---
Comment: was deleted

(was: 1) As explained in the docs, this is not needed. Plugin author can always 
subscribe to {{IIgnite.Stopping}} and {{IIgnite.Stopped}}. No reason to force 
plugin authors to implement one more method.)

> Define plugin API in .NET
> -
>
> Key: IGNITE-4441
> URL: https://issues.apache.org/jira/browse/IGNITE-4441
> Project: Ignite
>  Issue Type: Sub-task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> Define plugin API in .NET similar to Java API:
> * {{IgniteConfiguration.PluginConfigurations}}
> * {{IPluginProvider}}
> * {{IPluginContext}}
> Should work like this:
> * Plugin author implements {{IPluginProvider}}
> * We discover plugins on Ignite start by examining all DLL files in the 
> folder, load DLLs where {{IPluginProvider}} implementations are present, 
> instantiate these implementations, and call 
> {{IPluginProvider.Start(IPluginContext)}} method.
> * Plugin user can retrieve plugin via {{IIgnite.GetPlugin(string name)}}, 
> or via helper extension method provided by plugin author.
> This task does not include the possibility to interact with Java from the 
> plugin code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-1943) ignitevisorcmd: wrong behavior for "mclear" command

2016-12-26 Thread Andrey Novikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778131#comment-15778131
 ] 

Andrey Novikov commented on IGNITE-1943:


Hi [~samaitra],
I reviewed your pull request and found the following:
* The "mclear" command can still be executed from the console;
* The "mcompact" command does not process host, cache and alert variables;
* For node variables, the "mcompact" command may also change their order.


> ignitevisorcmd: wrong behavior for "mclear" command
> ---
>
> Key: IGNITE-1943
> URL: https://issues.apache.org/jira/browse/IGNITE-1943
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vasilisa  Sidorova
>Assignee: Saikat Maitra
> Fix For: 2.0
>
>
> -
> DESCRIPTION
> -
> "mclear" should clears all Visor console variables before they will be 
> updated during next run of the corresponding command. It OK for all type of 
> shortcut (@cX, @tX, @aX, @eX) exept shortcut for node-id variables. After 
> "mclear" @nX are disappear from any statistics 
> -
> STEPS FOR REPRODUCE
> -
> # Start cluster
> # Start visorcmd ([IGNITE_HOME]/bin/ignitevisorcmd.sh)
> # Connect to the started cluster (visor> open)
> # Execute any tasks in it (for example, run any example from EventsExample, 
> DeploymentExample, ComputeContinuousMapperExample, ComputeTaskMapExample, 
> ComputeTaskSplitExample) 
> # Create several alerts (visor> alert)
> # Run visor "mclear" command
> # Run visor "node" command
> -
> ACTUAL RESULT
> -
> There isn't @nX values in the first table's column:
> {noformat}
> visor> node
> Select node from:
> +==+
> | # |   Node ID8(@), IP   | Up Time  | CPUs | CPU Load | Free Heap |
> +==+
> | 0 | 6B3E4033, 127.0.0.1 | 00:38:41 | 8| 0.20 %   | 97.00 %   |
> | 1 | 0A0D6989, 127.0.0.1 | 00:38:22 | 8| 0.17 %   | 97.00 %   |
> | 2 | 0941E33F, 127.0.0.1 | 00:35:46 | 8| 0.23 %   | 97.00 %   |
> +--+
> {noformat}
> -
> EXPECTED RESULT
> -
> There is @nX values in the first table's column:
> {noformat}
> visor> node
> Select node from:
> +===+
> | # | Node ID8(@), IP  | Up Time  | CPUs | CPU Load | Free Heap |
> +===+
> | 0 | 6B3E4033(@n0), 127.0.0.1 | 00:38:13 | 8| 0.23 %   | 97.00 %   |
> | 1 | 0A0D6989(@n1), 127.0.0.1 | 00:37:54 | 8| 0.27 %   | 97.00 %   |
> | 2 | 0941E33F(@n2), 127.0.0.1 | 00:35:18 | 8| 0.20 %   | 97.00 %   |
> +---+
> {noformat}
> "mclear" should get effect only for "mget @nX" command (with result "Missing 
> variable with name: '@nX'") and @nX variables  should get new values after 
> "node" (or "tasks") command execution. Like in case for all others @ 
> variables (@cX, @tX, @aX, @eX) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4441) Define plugin API in .NET

2016-12-26 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778100#comment-15778100
 ] 

Pavel Tupitsyn commented on IGNITE-4441:


1) As explained in the docs, this is not needed. Plugin author can always 
subscribe to {{IIgnite.Stopping}} and {{IIgnite.Stopped}}. No reason to force 
plugin authors to implement one more method.

> Define plugin API in .NET
> -
>
> Key: IGNITE-4441
> URL: https://issues.apache.org/jira/browse/IGNITE-4441
> Project: Ignite
>  Issue Type: Sub-task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.0
>
>
> Define plugin API in .NET similar to Java API:
> * {{IgniteConfiguration.PluginConfigurations}}
> * {{IPluginProvider}}
> * {{IPluginContext}}
> Should work like this:
> * Plugin author implements {{IPluginProvider}}
> * We discover plugins on Ignite start by examining all DLL files in the 
> folder, load DLLs where {{IPluginProvider}} implementations are present, 
> instantiate these implementations, and call 
> {{IPluginProvider.Start(IPluginContext)}} method.
> * Plugin user can retrieve plugin via {{IIgnite.GetPlugin(string name)}}, 
> or via helper extension method provided by plugin author.
> This task does not include the possibility to interact with Java from the 
> plugin code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778093#comment-15778093
 ] 

Vladimir Ozerov commented on IGNITE-4480:
-

See {{GridFutureAdapter.cancel()}} - it does nothing. Why? It means that child 
classes are free to ignore cancellation, which looks wrong to me.

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if user invoke "cancel()" on future, he expects that it will be 
> completed immediately. E.g. this is how JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform user about cancellation 
> through flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete future with special "CancellationException" 
> immediately.
> 2) {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add {{isCompletedExceptionally()}} method which will return 
> {{true}} in case future was completed with any kind of exception. This will 
> work similar to JDK {{CompletableFuture}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4476) GridIoManger must always process messages asynchronously

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778085#comment-15778085
 ] 

Vladimir Ozerov commented on IGNITE-4476:
-

Personally, I observed it in the GridGain plugin. Yes, it is proprietary code, 
but it demonstrates the problem nicely. In short, we have the following 
pseudocode there:

{code}
rwLock.readLock();
GridIoManger.send(...);
rwLock.readUnlock();
{code}

We expect this piece of code to execute quickly, so that the read lock is 
released promptly. However, the send is processed synchronously, which leads to 
a very nasty deadlock on the {{readLock <- writeLock <- readLock}} chain.

That is, we have hidden deadlocks in the product due to this optimization. I am 
not talking about the striped pool, it is out of scope here. My point is that if 
a message should be executed in pool X by design, it should be executed in that 
pool irrespective of whether it is a local or a remote message.
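
For illustration, a minimal sketch of the proposed behavior with plain JDK types 
(not the real {{GridIoManager}} code, and the class and method names are 
assumptions): the local-node path submits to a pool instead of invoking the 
listener in the sender thread.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch only: messages to the local node are still processed asynchronously. */
public class LocalDispatchSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void send(boolean toLocalNode, Runnable listener) {
        if (toLocalNode)
            pool.execute(listener); // sender returns immediately, holding no extra locks
        else
            sendOverNetwork(listener); // remote path is unchanged
    }

    private void sendOverNetwork(Runnable listener) {
        // Placeholder for the real network send.
    }
}
{code}

With such dispatch, the read lock in the pseudocode above is released before the 
message is processed, so the {{readLock <- writeLock <- readLock}} chain cannot 
form.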


> GridIoManger must always process messages asynchronously
> 
>
> Key: IGNITE-4476
> URL: https://issues.apache.org/jira/browse/IGNITE-4476
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> If message is to be sent to remote node, we just send it (surprise :-)). But 
> if message is to be sent to local node, we have a strange "optimization" - we 
> process it synchronously in the same thread. 
> This is wrong as it easily leads to all kind of weird starvations and 
> deadlocks.
> *Solution*
> Never ever process messages synchronously. For local node we should put 
> message runnable into appropriate thread pool and exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2793) ODBC: Add support for Arrays.

2016-12-26 Thread Igor Sapego (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778063#comment-15778063
 ] 

Igor Sapego commented on IGNITE-2793:
-

Sergey,

Seems like there are conflicts when trying to merge your PR to master. Please, 
re-merge with master and submit for review again.

> ODBC: Add support for Arrays.
> -
>
> Key: IGNITE-2793
> URL: https://issues.apache.org/jira/browse/IGNITE-2793
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Affects Versions: 1.5.0.final
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>  Labels: roadmap
> Fix For: 2.0
>
>
> Support for arrays should be added. We need to at least support byte arrays 
> to match {{SQL_C_BINARY}} type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (IGNITE-4464) JclLogger ignores IGNITE_QUIET system property

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov closed IGNITE-4464.
---

> JclLogger ignores IGNITE_QUIET system property
> --
>
> Key: IGNITE-4464
> URL: https://issues.apache.org/jira/browse/IGNITE-4464
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.7
>Reporter: Andrew Mashenkov
>  Labels: easyfix, newbie
> Fix For: 2.0
>
>
> {{JclLogger}} writes in quiet mode into {{System.out}} regardless of the 
> value of the {{IGNITE_QUIET}} system property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4464) JclLogger ignores IGNITE_QUIET system property

2016-12-26 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov resolved IGNITE-4464.
-
Resolution: Duplicate

> JclLogger ignores IGNITE_QUIET system property
> --
>
> Key: IGNITE-4464
> URL: https://issues.apache.org/jira/browse/IGNITE-4464
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.7
>Reporter: Andrew Mashenkov
>  Labels: easyfix, newbie
> Fix For: 2.0
>
>
> {{JclLogger}} writes in quiet mode into {{System.out}} regardless of the 
> value of the {{IGNITE_QUIET}} system property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-1680) CPP: Implement basic API for user entry point lookup

2016-12-26 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778033#comment-15778033
 ] 

Vladimir Ozerov commented on IGNITE-1680:
-

Igor, my comments:

1) Let's add this manager to the public {{Ignite}} API, so that a user is able 
to register his own extensions at runtime.
2) Extensions should be registered on a per-Ignite basis, i.e. no statics.
3) Let's make sure that we have sensible names. E.g. {{IgniteExtensions 
Extensions()}}, {{IgniteRpc rpc()}}, {{IgniteCallbacks callbacks}}, etc.
4) We should carefully think about possible user plugins. In the future we will 
certainly add a plugin API similar to the one we currently have in Java. So what 
if a user wants to register his own callback, and you do not know its signature 
in advance? We should provide some "raw" API as well.
5) Any chance that we will be able to specify extensions in the config in text 
form? Or is that impossible without "reflection" in C++, so that we will have to 
fall back to DLL loading? 


> CPP: Implement basic API for user entry point lookup
> 
>
> Key: IGNITE-1680
> URL: https://issues.apache.org/jira/browse/IGNITE-1680
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Affects Versions: 1.1.4
>Reporter: Igor Sapego
>Assignee: Vladimir Ozerov
>  Labels: cpp, roadmap
> Fix For: 2.0
>
>
> Need to implement IgniteCompute class for C++ with one basic method 
> IgniteCompute::Call(...) which will provide basic remote job execution API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4476) GridIoManger must always process messages asynchronously

2016-12-26 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778013#comment-15778013
 ] 

Yakov Zhdanov commented on IGNITE-4476:
---

Vova, please provide an example where this behavior leads to a deadlock.

I would agree that we need to put message processing onto a stripe once we move 
to a proper thread-per-partition model, but this should be done when a partition 
is being sent to a remote node during rebalancing.

> GridIoManger must always process messages asynchronously
> 
>
> Key: IGNITE-4476
> URL: https://issues.apache.org/jira/browse/IGNITE-4476
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> If message is to be sent to remote node, we just send it (surprise :-)). But 
> if message is to be sent to local node, we have a strange "optimization" - we 
> process it synchronously in the same thread. 
> This is wrong as it easily leads to all kind of weird starvations and 
> deadlocks.
> *Solution*
> Never ever process messages synchronously. For local node we should put 
> message runnable into appropriate thread pool and exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4480) Incorrect cancellation semantics in IgniteFuture

2016-12-26 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15777998#comment-15777998
 ] 

Yakov Zhdanov commented on IGNITE-4480:
---

Can you please add an example of this behavior?

From what I see, internal futures should be completed immediately after 
onCancelled() is called. In this case get() should throw the converted exception 
IgniteFutureCancelledCheckedException -> IgniteFutureCancelledException.

> Incorrect cancellation semantics in IgniteFuture
> 
>
> Key: IGNITE-4480
> URL: https://issues.apache.org/jira/browse/IGNITE-4480
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
> Fix For: 2.0
>
>
> *Problem*
> Normally, if a user invokes "cancel()" on a future, they expect that it will be 
> completed immediately. E.g. this is how the JDK {{CompletableFuture}} works. 
> However, this is not the case for Ignite - we inform the user about cancellation 
> through a flag, which is also not set properly in most cases.
> *Solution*
> 1) {{cancel()}} must complete the future with a special "CancellationException" 
> immediately.
> 2) The {{isCancelled()}} method should be either removed or deprecated.
> 3) We should add an {{isCompletedExceptionally()}} method which will return 
> {{true}} in case the future was completed with any kind of exception. This will 
> work similarly to the JDK {{CompletableFuture}}.
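
For reference, a minimal JDK example of the semantics described above; it shows 
the {{CompletableFuture}} behavior this ticket wants to mirror, not the Ignite 
implementation:

{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;

public class CancelSemanticsDemo {
    public static void main(String[] args) {
        CompletableFuture<String> fut = new CompletableFuture<>();

        // cancel() completes the future immediately with a CancellationException.
        fut.cancel(true);

        System.out.println(fut.isCancelled());              // true
        System.out.println(fut.isCompletedExceptionally()); // true

        try {
            fut.join();
        }
        catch (CancellationException e) {
            // Callers observe the cancellation right away, without polling a flag.
            System.out.println("cancelled: " + e);
        }
    }
}
{code}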



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4487) NPE on query execution

2016-12-26 Thread Alexander Menshikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15777996#comment-15777996
 ] 

Alexander Menshikov commented on IGNITE-4487:
-

I found that in the inner class 
'GridCacheQueryAdapter.ScanQueryFallbackClosableIterator' the constructor calls 
'init()', but 'init()' must not be called while the 'nodes' field is empty. In 
the source code it looks like this:

{code:java}
private ScanQueryFallbackClosableIterator(int part, GridCacheQueryAdapter qry,
    GridCacheQueryManager qryMgr, GridCacheContext cctx) {
    this.qry = qry;
    this.qryMgr = qryMgr;
    this.cctx = cctx;
    this.part = part;

    nodes = fallbacks(cctx.discovery().topologyVersionEx());
    // !!! Here nodes.isEmpty() == true, and init() will fail later. !!!
    init();
}
{code}

I can fix it by adding a check to the code, but I need to know which behavior is 
best in this case. As I understand it, the list of nodes is empty when no node 
owns the requested partition, which means data loss, so we either need to throw 
a meaningful exception or ignore the situation (see the sketch below). But maybe 
I missed something.
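
One possible shape for such a check, assuming we prefer a meaningful exception 
over the silent NPE; the exception type, message, and the simplified collection 
type are illustrative, not a decided fix:

{code:java}
import java.util.Collection;

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.cluster.ClusterTopologyException;

// Illustrative guard only: placeholders, not the agreed fix for IGNITE-4487.
class ScanQueryFallbackCheckSketch {
    /** Fails fast with a descriptive error instead of letting init() hit an NPE. */
    static void ensureOwningNodesExist(Collection<ClusterNode> nodes, int part) {
        if (nodes == null || nodes.isEmpty())
            throw new ClusterTopologyException("Failed to execute scan query " +
                "(no alive nodes own the partition) [part=" + part + ']');
    }
}
{code}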

> NPE on query execution
> --
>
> Key: IGNITE-4487
> URL: https://issues.apache.org/jira/browse/IGNITE-4487
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 1.8
>Reporter: Dmitry Karachentsev
> Fix For: 2.0
>
> Attachments: IgniteThread.java, Main.java
>
>
> NPE may be thrown when called destroyCache() and started querying.
> Attached example reproduces this case when 
> GridDiscoveryManager#removeCacheFilter called but cache state haven't been 
> changed to STOPPED 
> org.apache.ignite.internal.processors.cache.GridCacheGateway#onStopped.
> {code:none}
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> null
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:740)
> at 
> com.intellica.evam.engine.event.future.FutureEventWorker.processFutureEvents(FutureEventWorker.java:117)
> at 
> com.intellica.evam.engine.event.future.FutureEventWorker.run(FutureEventWorker.java:66)
> Caused by: class org.apache.ignite.IgniteCheckedException: null
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1693)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:494)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:732)
> ... 2 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(GridCacheQueryAdapter.java:712)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(GridCacheQueryAdapter.java:677)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(GridCacheQueryAdapter.java:628)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter.executeScanQuery(GridCacheQueryAdapter.java:548)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:497)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:495)
> at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1670)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)