[jira] [Updated] (IGNITE-4395) Implement communication backpressure per policy - SYSTEM or PUBLIC

2016-12-07 Thread Dmitry Karachentsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Karachentsev updated IGNITE-4395:

Description: 
1) Start two data nodes with some cache.
2) From one node, post a large number of jobs to the other in async mode. Those 
jobs perform some cache operations.
3) The grid hangs almost immediately and all threads are sleeping except the 
public ones, which are waiting for responses.

This happens because all cache and job messages are queued in the communication 
layer and limited by the default queue size (1024). It looks like the jobs are 
waiting for cache responses that cannot be received because of this limit.

The proper solution here is to apply communication backpressure per policy 
(SYSTEM or PUBLIC) rather than at a single point as it is now. It could be 
achieved by having two queues per communication session or (which looks a bit 
easier to implement) by having separate connections.

[PR#1331|https://github.com/apache/ignite/pull/1331] contains a test that leads 
to the grid hang.
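
For context, a minimal sketch of where the single shared limit lives today (a 
plain Java configuration snippet; the value shown is the default mentioned 
above):

{code}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommQueueLimitExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // Today a single message queue limit applies to all communication traffic,
        // regardless of whether a message belongs to the SYSTEM or the PUBLIC pool.
        commSpi.setMessageQueueLimit(1024);

        cfg.setCommunicationSpi(commSpi);

        Ignition.start(cfg);
    }
}
{code}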

  was:
1) Start two data nodes with some cache.
2) From one node, post a large number of jobs to the other in async mode. Those 
jobs perform some cache operations.
3) The grid hangs almost immediately and all threads are sleeping except the 
public ones, which are waiting for responses.

This happens because all cache and job messages are queued in the communication 
layer and limited by the default queue size (1024). It looks like the jobs are 
waiting for cache responses that cannot be received because of this limit.

The proper solution here is to apply communication backpressure per policy 
(SYSTEM or PUBLIC) rather than at a single point as it is now. It could be 
achieved by having two queues per communication session or (which looks a bit 
easier to implement) by having separate connections.


> Implement communication backpressure per policy - SYSTEM or PUBLIC
> --
>
> Key: IGNITE-4395
> URL: https://issues.apache.org/jira/browse/IGNITE-4395
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, compute
>Affects Versions: 1.7
>Reporter: Dmitry Karachentsev
>
> 1) Start two data nodes with some cache.
> 2) From one node, post a large number of jobs to the other in async mode. Those 
> jobs perform some cache operations.
> 3) The grid hangs almost immediately and all threads are sleeping except the 
> public ones, which are waiting for responses.
> This happens because all cache and job messages are queued in the communication 
> layer and limited by the default queue size (1024). It looks like the jobs are 
> waiting for cache responses that cannot be received because of this limit.
> The proper solution here is to apply communication backpressure per policy 
> (SYSTEM or PUBLIC) rather than at a single point as it is now. It could be 
> achieved by having two queues per communication session or (which looks a bit 
> easier to implement) by having separate connections.
> [PR#1331|https://github.com/apache/ignite/pull/1331] contains a test that leads 
> to the grid hang.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4395) Implement communication backpressure per policy - SYSTEM or PUBLIC

2016-12-07 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4395:
---

 Summary: Implement communication backpressure per policy - SYSTEM 
or PUBLIC
 Key: IGNITE-4395
 URL: https://issues.apache.org/jira/browse/IGNITE-4395
 Project: Ignite
  Issue Type: Improvement
  Components: cache, compute
Affects Versions: 1.7
Reporter: Dmitry Karachentsev


1) Start two data nodes with some cache.
2) From one node, post a large number of jobs to the other in async mode. Those 
jobs perform some cache operations.
3) The grid hangs almost immediately and all threads are sleeping except the 
public ones, which are waiting for responses.

This happens because all cache and job messages are queued in the communication 
layer and limited by the default queue size (1024). It looks like the jobs are 
waiting for cache responses that cannot be received because of this limit.

The proper solution here is to apply communication backpressure per policy 
(SYSTEM or PUBLIC) rather than at a single point as it is now. It could be 
achieved by having two queues per communication session or (which looks a bit 
easier to implement) by having separate connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4378) Affinity function should support assigning partition to subset of cluster nodes

2016-12-07 Thread Alexei Scherbakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-4378:
--
Description: 
Currently both default affinity function (AF) implementations randomly choose 
the primary node among all topology nodes.

This may not be enough to handle complex data placement scenarios without 
implementing a custom AF.

For example, some partitions can be assigned to more powerful hardware, or 
limited to a subset of cluster nodes for ease of management or for fault 
tolerance scenarios.

We should implement a node filter which will allow choosing the subset of 
cluster nodes on which primary and backup partitions are placed.

Together with the already existing ability to filter backup nodes (using 
{{AffinityBackupFilter}}), this will allow implementing different approaches to 
data placement with Ignite without resorting to a custom AF.

It's also desirable to include a practical example of both topology filters 
based on node attribute values.

The proposed primary filter interface is below.

{noformat}
/**
 * Allows placing a partition on a subset of cluster nodes.
 *
 * Backup nodes will also be assigned from the subset.
 */
public interface AffinityPrimaryFilter extends IgniteBiClosure<Integer, List<ClusterNode>, List<ClusterNode>> {
    /**
     * Returns nodes allowed to contain the given partition.
     *
     * @param partition Partition.
     * @param currentTopologyNodes All nodes from the current topology.
     * @return Subset of nodes.
     */
    @Override public List<ClusterNode> apply(Integer partition, 
        List<ClusterNode> currentTopologyNodes);
}
{noformat}
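
A hypothetical implementation of the proposed filter, restricting partitions to 
nodes carrying an illustrative "CELL" attribute (the class and attribute names 
are assumptions, not part of the proposal):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.ignite.cluster.ClusterNode;

// Hypothetical example: even partitions are kept on nodes whose "CELL"
// attribute equals "A", odd partitions on nodes whose "CELL" equals "B".
public class CellAwarePrimaryFilter implements AffinityPrimaryFilter {
    @Override public List<ClusterNode> apply(Integer partition, 
        List<ClusterNode> currentTopologyNodes) {
        String cell = partition % 2 == 0 ? "A" : "B";

        List<ClusterNode> res = new ArrayList<>();

        for (ClusterNode node : currentTopologyNodes) {
            if (cell.equals(node.attribute("CELL")))
                res.add(node);
        }

        // Fall back to the whole topology if no node matches the attribute.
        return res.isEmpty() ? currentTopologyNodes : res;
    }
}
{code}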


  was:
Currently both default affinity function (AF) implementations randomly choose 
the primary partition among all topology nodes.

This may not be enough to handle complex data placement scenarios without 
implementing a custom AF.

For example, some partitions can be assigned to more powerful hardware, or 
limited to a subset of cluster nodes for ease of management or for fault 
tolerance scenarios.

We should implement a node filter which will allow choosing the subset of 
cluster nodes on which primary and backup partitions are placed.

Together with the already existing ability to filter backup nodes (using 
{{AffinityBackupFilter}}), this will allow implementing different approaches to 
data placement with Ignite without resorting to a custom AF.

It's also desirable to include a practical example of both topology filters 
based on node attribute values.

The proposed primary filter interface is below.

{noformat}
/**
 * Allows placing a partition on a subset of cluster nodes.
 *
 * Backup nodes will also be assigned from the subset.
 */
public interface AffinityPrimaryFilter extends IgniteBiClosure<Integer, List<ClusterNode>, List<ClusterNode>> {
    /**
     * Returns nodes allowed to contain the given partition.
     *
     * @param partition Partition.
     * @param currentTopologyNodes All nodes from the current topology.
     * @return Subset of nodes.
     */
    @Override public List<ClusterNode> apply(Integer partition, 
        List<ClusterNode> currentTopologyNodes);
}
{noformat}



> Affinity function should support assigning partition to subset of cluster 
> nodes
> ---
>
> Key: IGNITE-4378
> URL: https://issues.apache.org/jira/browse/IGNITE-4378
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Dmitriy Setrakyan
>Assignee: Alexei Scherbakov
> Fix For: 2.0
>
>
> Currently both default affinity function (AF) implementations randomly choose 
> the primary node among all topology nodes.
> This may not be enough to handle complex data placement scenarios without 
> implementing a custom AF.
> For example, some partitions can be assigned to more powerful hardware, or 
> limited to a subset of cluster nodes for ease of management or for fault 
> tolerance scenarios.
> We should implement a node filter which will allow choosing the subset of 
> cluster nodes on which primary and backup partitions are placed.
> Together with the already existing ability to filter backup nodes (using 
> {{AffinityBackupFilter}}), this will allow implementing different approaches to 
> data placement with Ignite without resorting to a custom AF.
> It's also desirable to include a practical example of both topology filters 
> based on node attribute values.
> The proposed primary filter interface is below.
> {noformat}
> /**
>  * Allows placing a partition on a subset of cluster nodes.
>  *
>  * Backup nodes will also be assigned from the subset.
>  */
> public interface AffinityPrimaryFilter extends IgniteBiClosure<Integer, List<ClusterNode>, List<ClusterNode>> {
>     /**
>      * Returns nodes allowed to contain the given partition.
>      *
>      * @param partition Partition.
>      * @param currentTopologyNodes All nodes from the current topology.
>      * @return Subset of nodes.
>      */
>     @Override public List<ClusterNode> apply(Integer partition, 
>         List<ClusterNode> currentTopologyNodes);
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4378) Affinity function should support assigning partition to subset of cluster nodes

2016-12-07 Thread Alexei Scherbakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-4378:
--
Description: 
Currently both default affinity function (AF) implementations randomly choose 
the primary partition among all topology nodes.

This may not be enough to handle complex data placement scenarios without 
implementing a custom AF.

For example, some partitions can be assigned to more powerful hardware, or 
limited to a subset of cluster nodes for ease of management or for fault 
tolerance scenarios.

We should implement a node filter which will allow choosing the subset of 
cluster nodes on which primary and backup partitions are placed.

Together with the already existing ability to filter backup nodes (using 
{{AffinityBackupFilter}}), this will allow implementing different approaches to 
data placement with Ignite without resorting to a custom AF.

It's also desirable to include a practical example of both topology filters 
based on node attribute values.

The proposed primary filter interface is below.

{noformat}
/**
 * Allows placing a partition on a subset of cluster nodes.
 *
 * Backup nodes will also be assigned from the subset.
 */
public interface AffinityPrimaryFilter extends IgniteBiClosure<Integer, List<ClusterNode>, List<ClusterNode>> {
    /**
     * Returns nodes allowed to contain the given partition.
     *
     * @param partition Partition.
     * @param currentTopologyNodes All nodes from the current topology.
     * @return Subset of nodes.
     */
    @Override public List<ClusterNode> apply(Integer partition, 
        List<ClusterNode> currentTopologyNodes);
}
{noformat}


  was:
# Every node receives a set of attributes on startup
# Affinity function should receive a pluggable {{AffinityResolver}} responsible 
for selecting primaries and backups based on the node attributes.

Need to suggest an API for the pluggable {{AffinityResolver}}.

Summary: Affinity function should support assigning partition to subset 
of cluster nodes  (was: Affinity function should support selecting nodes based 
on node attributes)

> Affinity function should support assigning partition to subset of cluster 
> nodes
> ---
>
> Key: IGNITE-4378
> URL: https://issues.apache.org/jira/browse/IGNITE-4378
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Dmitriy Setrakyan
>Assignee: Alexei Scherbakov
> Fix For: 2.0
>
>
> Currently both default affinity function (AF) implementations randomly choose 
> the primary partition among all topology nodes.
> This may not be enough to handle complex data placement scenarios without 
> implementing a custom AF.
> For example, some partitions can be assigned to more powerful hardware, or 
> limited to a subset of cluster nodes for ease of management or for fault 
> tolerance scenarios.
> We should implement a node filter which will allow choosing the subset of 
> cluster nodes on which primary and backup partitions are placed.
> Together with the already existing ability to filter backup nodes (using 
> {{AffinityBackupFilter}}), this will allow implementing different approaches to 
> data placement with Ignite without resorting to a custom AF.
> It's also desirable to include a practical example of both topology filters 
> based on node attribute values.
> The proposed primary filter interface is below.
> {noformat}
> /**
>  * Allows placing a partition on a subset of cluster nodes.
>  *
>  * Backup nodes will also be assigned from the subset.
>  */
> public interface AffinityPrimaryFilter extends IgniteBiClosure<Integer, List<ClusterNode>, List<ClusterNode>> {
>     /**
>      * Returns nodes allowed to contain the given partition.
>      *
>      * @param partition Partition.
>      * @param currentTopologyNodes All nodes from the current topology.
>      * @return Subset of nodes.
>      */
>     @Override public List<ClusterNode> apply(Integer partition, 
>         List<ClusterNode> currentTopologyNodes);
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4394) Web console: memory leak when dialog "Connection to Ignite Node is not established" on the screen

2016-12-07 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-4394:
--

 Summary: Web console: memory leak when dialog "Connection to 
Ignite Node is not established" on the screen
 Key: IGNITE-4394
 URL: https://issues.apache.org/jira/browse/IGNITE-4394
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Pavel Konstantinov


I've noticed a memory leak when the dialog stays open for a long period.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4387) Incorrect Karaf OSGI feature for ignite-rest-http

2016-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731300#comment-15731300
 ] 

ASF GitHub Bot commented on IGNITE-4387:


GitHub user SergiusSidorov opened a pull request:

https://github.com/apache/ignite/pull/1330

IGNITE-4387 Fix incorrect Karaf OSGI feature for ignite-rest-http

IGNITE-4387 Fix

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SergiusSidorov/ignite ignite-4387

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1330


commit 2b7e5766290c2f3ea666336b8ce86f8aaed38d1d
Author: Sergej Sidorov 
Date:   2016-12-08T06:32:00Z

IGNITE-4387 Fix incorrect Karaf OSGI feature for ignite-rest-http




> Incorrect Karaf OSGI feature for ignite-rest-http
> -
>
> Key: IGNITE-4387
> URL: https://issues.apache.org/jira/browse/IGNITE-4387
> Project: Ignite
>  Issue Type: Bug
>  Components: osgi, rest
>Affects Versions: 1.7
>Reporter: Sergej Sidorov
>Assignee: Sergej Sidorov
>Priority: Minor
>
> In accordance with commit f177d4312c47 (IGNITE-3277: Replaced outdated 
> json-lib 2.4 with modern Jackson 2.7.5), the dependencies of the 
> ignite-rest-http module have been updated, but the Karaf OSGI feature 
> configuration has not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-104) Need to enable thread-per-partition mode for atomic caches

2016-12-07 Thread Yakov Zhdanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakov Zhdanov updated IGNITE-104:
-
Fix Version/s: 1.9

> Need to enable thread-per-partition mode for atomic caches
> --
>
> Key: IGNITE-104
> URL: https://issues.apache.org/jira/browse/IGNITE-104
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Dmitriy Setrakyan
>Priority: Critical
> Fix For: 1.9
>
>
> Currently messages are processed in any order for atomic caches. We can add 
> ordered processing by assigning a thread per partition. This way we will be 
> able to remove any locking as well.
> The {{Cache.invoke()}} method would then be able to invoke the EntryProcessor 
> on the primary and backup independently. Right now we invoke the 
> EntryProcessor only on the primary node and then send the computed value to 
> the backup node, which may be expensive.
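
A minimal sketch of the ordering idea (illustration only, not Ignite 
internals): route every message to a worker chosen by partition number, so 
updates to the same partition are applied sequentially without per-entry 
locking.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: one single-threaded worker per stripe. All messages for a given
// partition land on the same stripe, so they are applied in arrival order.
public class PartitionStripedExecutor {
    private final ExecutorService[] stripes;

    public PartitionStripedExecutor(int threads) {
        stripes = new ExecutorService[threads];

        for (int i = 0; i < threads; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    public void execute(int partition, Runnable msgHandler) {
        stripes[partition % stripes.length].execute(msgHandler);
    }
}
{code}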



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-104) Need to enable thread-per-partition mode for atomic caches

2016-12-07 Thread Yakov Zhdanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakov Zhdanov reassigned IGNITE-104:


Assignee: Yakov Zhdanov

> Need to enable thread-per-partition mode for atomic caches
> --
>
> Key: IGNITE-104
> URL: https://issues.apache.org/jira/browse/IGNITE-104
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Dmitriy Setrakyan
>Assignee: Yakov Zhdanov
>Priority: Critical
> Fix For: 1.9
>
>
> Currently messages are processed in any order for atomic caches. We can add 
> ordered processing by assigning a thread per partition. This way we will be 
> able to remove any locking as well.
> The {{Cache.invoke()}} method would then be able to invoke the EntryProcessor 
> on the primary and backup independently. Right now we invoke the 
> EntryProcessor only on the primary node and then send the computed value to 
> the backup node, which may be expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4393) Need to remove code putting cache operations to sequence

2016-12-07 Thread Yakov Zhdanov (JIRA)
Yakov Zhdanov created IGNITE-4393:
-

 Summary: Need to remove code putting cache operations to sequence
 Key: IGNITE-4393
 URL: https://issues.apache.org/jira/browse/IGNITE-4393
 Project: Ignite
  Issue Type: Improvement
Reporter: Yakov Zhdanov
Assignee: Semen Boikov
 Fix For: 1.9


Currently Ignite guarantees that async operations started from the same thread 
are applied in the same order. The code below will become useless once we move 
to the thread-per-partition model (a short illustration of the current 
guarantee follows the list).

org.apache.ignite.internal.processors.cache.GridCacheAdapter#toggleAsync
org.apache.ignite.internal.processors.cache.GridCacheAdapter#asyncOpAcquire
org.apache.ignite.internal.processors.cache.GridCacheAdapter#asyncOpRelease
org.apache.ignite.internal.processors.cache.GridCacheAdapter#asyncOpsSem
org.apache.ignite.internal.processors.cache.GridCacheAdapter#lastFut
org.apache.ignite.internal.processors.cache.GridCacheAdapter.FutureHolder 
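
For illustration, the per-thread ordering guarantee being discussed, sketched 
against the public 1.x async API (this is not the internal code listed above):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteFuture;

public class AsyncOrderingExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, String> cache = 
            ignite.<Integer, String>getOrCreateCache("test").withAsync();

        // Both puts are started from the same thread, so today Ignite guarantees
        // they are applied in this order even though they are asynchronous.
        cache.put(1, "first");
        IgniteFuture<?> fut1 = cache.future();

        cache.put(1, "second");
        IgniteFuture<?> fut2 = cache.future();

        fut1.get();
        fut2.get();
    }
}
{code}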



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4392) Web Console: Implement support of management role

2016-12-07 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-4392:


 Summary: Web Console: Implement support of management role
 Key: IGNITE-4392
 URL: https://issues.apache.org/jira/browse/IGNITE-4392
 Project: Ignite
  Issue Type: Task
  Components: wizards
Affects Versions: 1.8
Reporter: Alexey Kuznetsov
 Fix For: 1.9


In the current implementation we have the following roles: user and admin.
We need one more role: manager. This role should allow viewing the admin panel, 
but all other actions (deleting users and granting admin rights to other users) 
should be disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4391) Web Console: Implement grouping by company in Admin panel

2016-12-07 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-4391:


 Summary: Web Console: Implement grouping by company in Admin panel
 Key: IGNITE-4391
 URL: https://issues.apache.org/jira/browse/IGNITE-4391
 Project: Ignite
  Issue Type: Task
  Components: wizards
Affects Versions: 1.8
Reporter: Alexey Kuznetsov
 Fix For: 1.9


It would be useful to have different views of the same data:
# list by name (current)
# group by company
# group by country



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4317) Redesign Queries Screen

2016-12-07 Thread Alexey Kuznetsov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730937#comment-15730937
 ] 

Alexey Kuznetsov edited comment on IGNITE-4317 at 12/8/16 3:31 AM:
---

Dmitriy, good progress.
But I have the following notes:
# Make section headers more "noticeable"
# Add a page size dropdown for "scan" queries.
# Instead of the checkbox "Show _key and _val" for "scan" queries, show the 
checkbox "Show Key and Value classes" (and change the logic accordingly).
# For "scan" queries, instead of the "Show query" link, show the label "Showing 
results for scan of _cache_name_"
# Rework "Fetch first page only" into a dropdown with the items "Unlimited, 
First 100, First 500, First 1000"


was (Author: kuaw26):
Dmitriy, good progress.
But I have following notes:
# Make sections headers more "noticeable"
# Add page size dropdown for  "scan" queries.
# Instead of checkbox "Show _key and _val" for  "scan" queries show checkbox 
"Show Key and Value classes" (and change logic accordingly).
# For "scan" queries instead of "Show query" link show Label "Showing results 
for scan of _cache_name_ "
#Rework "Fetch firts page only" to dropdown with items "Unlimited, First 100, 
First 500, First 1000"

> Redesign Queries Screen
> ---
>
> Key: IGNITE-4317
> URL: https://issues.apache.org/jira/browse/IGNITE-4317
> Project: Ignite
>  Issue Type: Task
>Reporter: Dmitriy Shabalin
>Assignee: Dmitriy Shabalin
>
> Simplify the Queries screen by splitting the execute action into separate SCAN 
> and QUERY actions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-4317) Redesign Queries Screen

2016-12-07 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-4317:


Assignee: Dmitriy Shabalin  (was: Alexey Kuznetsov)

Dmitriy, good progress.
But I have the following notes:
# Make section headers more "noticeable"
# Add a page size dropdown for "scan" queries.
# Instead of the checkbox "Show _key and _val" for "scan" queries, show the 
checkbox "Show Key and Value classes" (and change the logic accordingly).
# For "scan" queries, instead of the "Show query" link, show the label "Showing 
results for scan of _cache_name_"
# Rework "Fetch first page only" into a dropdown with the items "Unlimited, 
First 100, First 500, First 1000"

> Redesign Queries Screen
> ---
>
> Key: IGNITE-4317
> URL: https://issues.apache.org/jira/browse/IGNITE-4317
> Project: Ignite
>  Issue Type: Task
>Reporter: Dmitriy Shabalin
>Assignee: Dmitriy Shabalin
>
> Simplify the Queries screen by splitting the execute action into separate SCAN 
> and QUERY actions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4383) Update section on binary marshaller in Apache Ignite Readme.io docs

2016-12-07 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730845#comment-15730845
 ] 

Denis Magda commented on IGNITE-4383:
-

[~al.psc], please assign the documentation to me for a final review.

> Update section on binary marshaller in Apache Ignite Readme.io docs
> ---
>
> Key: IGNITE-4383
> URL: https://issues.apache.org/jira/browse/IGNITE-4383
> Project: Ignite
>  Issue Type: Task
>  Components: binary, documentation
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> The section about default behavior changes should be updated and a section 
> about identity resolvers has to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730440#comment-15730440
 ] 

Roman Shtykh edited comment on IGNITE-4140 at 12/8/16 1:46 AM:
---

[~avinogradov] My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?
There's a discussion on the dev list, and probably this has to go to 2.0 branch.


was (Author: roman_s):
[~avinogradov] My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> Current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer<K, V> extends StreamAdapter<MessageAndMetadata<byte[], byte[]>, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.
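
For illustration, a sketch of the extractor-based approach (assuming the raw 
record is the old consumer's {{MessageAndMetadata}} with byte[] key and value; 
the parsing below is purely illustrative):

{code}
import java.util.AbstractMap;
import java.util.Map;

import kafka.message.MessageAndMetadata;

import org.apache.ignite.stream.StreamSingleTupleExtractor;

// Sketch: the raw Kafka record is handed to the streamer as-is and a tuple
// extractor configured on StreamAdapter turns it into a cache entry.
public class StringTupleExtractor
    implements StreamSingleTupleExtractor<MessageAndMetadata<byte[], byte[]>, String, String> {
    @Override public Map.Entry<String, String> extract(MessageAndMetadata<byte[], byte[]> msg) {
        return new AbstractMap.SimpleEntry<>(new String(msg.key()), new String(msg.message()));
    }
}
{code}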



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730440#comment-15730440
 ] 

Roman Shtykh edited comment on IGNITE-4140 at 12/8/16 1:46 AM:
---

[~avinogradov] My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?
There's a discussion on the dev list, and probably this has to go to 2.0 branch.


was (Author: roman_s):
[~avinogradov] My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?
There's a discussion on the dev list, and probably this has to go to 2.0 branch.

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> Current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer<K, V> extends StreamAdapter<MessageAndMetadata<byte[], byte[]>, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4337) Introduce persistence interface to allow build reliable persistence plugins

2016-12-07 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730553#comment-15730553
 ] 

Dmitriy Setrakyan commented on IGNITE-4337:
---

One more comment. Is the name {{IgniteWriteAheadLogManager}} going to be 
confusing? We already have *managers* in Ignite which serve a different 
purpose. How about {{IgniteWriteAheadLogStore}}?

> Introduce persistence interface to allow build reliable persistence plugins
> ---
>
> Key: IGNITE-4337
> URL: https://issues.apache.org/jira/browse/IGNITE-4337
> Project: Ignite
>  Issue Type: Sub-task
>  Components: general
>Reporter: Alexey Goncharuk
> Fix For: 2.0
>
>
> With the page memory interface introduced, it may be possible to build a 
> persistence layer around this architecture. I think we should move the 
> PageMemory interface itself to the {{org.apache.ignite.plugin.storage}} package 
> and introduce the following interface to allow other components to log their 
> activity in a crash-resistant way:
> {code}
> /**
>  *
>  */
> public interface IgniteWriteAheadLogManager extends GridCacheSharedManager {
> /**
>  * @return {@code true} If we have to always write full pages.
>  */
> public boolean isAlwaysWriteFullPages();
> /**
>  * @return {@code true} if WAL will perform fair syncs on fsync call.
>  */
> public boolean isFullSync();
> /**
>  * Resumes logging after start. When WAL manager is started, it will skip 
> logging any updates until this
>  * method is called to avoid logging changes induced by the state restore 
> procedure.
>  */
> public void resumeLogging(WALPointer lastWrittenPtr) throws 
> IgniteCheckedException;
> /**
>  * Appends the given log entry to the write-ahead log.
>  *
>  * @param entry entry to log.
>  * @return WALPointer that may be passed to {@link #fsync(WALPointer)} 
> method to make sure the record is
>  *  written to the log.
>  * @throws IgniteCheckedException If failed to construct log entry.
>  * @throws StorageException If IO error occurred while writing log entry.
>  */
> public WALPointer log(WALRecord entry) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Makes sure that all log entries written to the log up until the 
> specified pointer are actually persisted to
>  * the underlying storage.
>  *
>  * @param ptr Optional pointer to sync. If {@code null}, will sync up to 
> the latest record.
>  * @throws IgniteCheckedException If
>  * @throws StorageException
>  */
> public void fsync(WALPointer ptr) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Invoke this method to iterate over the written log entries.
>  *
>  * @param start Optional WAL pointer from which to start iteration.
>  * @return Records iterator.
>  * @throws IgniteException If failed to start iteration.
>  * @throws StorageException If IO error occurred while reading WAL 
> entries.
>  */
> public WALIterator replay(WALPointer start) throws 
> IgniteCheckedException, StorageException;
> /**
>  * Gives a hint to WAL manager to clear entries logged before the given 
> pointer. Some entries before the given pointer will be kept because there is 
> a configurable WAL 
> history size. Those entries may be used
>  * for partial partition rebalancing.
>  *
>  * @param ptr Pointer for which it is safe to clear the log.
>  * @return Number of deleted WAL segments.
>  */
> public int truncate(WALPointer ptr);
> }
> {code}
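
For illustration, a sketch of how another component might use the proposed 
interface (wal, lastCheckpointPtr and applyRecord(...) are hypothetical 
placeholders, and WALIterator is assumed to be a plain iterator over WALRecord 
entries):

{code}
void logAndRecover(IgniteWriteAheadLogManager wal, WALPointer lastCheckpointPtr,
    WALRecord entry) throws IgniteCheckedException, StorageException {
    // Append the record to the write-ahead log.
    WALPointer ptr = wal.log(entry);

    // Block until the record is persisted to the underlying storage.
    wal.fsync(ptr);

    // On restart, replay records written after the last checkpoint.
    WALIterator it = wal.replay(lastCheckpointPtr);

    while (it.hasNext())
        applyRecord(it.next());
}
{code}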



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4337) Introduce persistence interface to allow build reliable persistence plugins

2016-12-07 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730490#comment-15730490
 ] 

Dmitriy Setrakyan commented on IGNITE-4337:
---

BTW, please also add a top-level javadoc whenever you get a chance.

> Introduce persistence interface to allow build reliable persistence plugins
> ---
>
> Key: IGNITE-4337
> URL: https://issues.apache.org/jira/browse/IGNITE-4337
> Project: Ignite
>  Issue Type: Sub-task
>  Components: general
>Reporter: Alexey Goncharuk
> Fix For: 2.0
>
>
> With the page memory interface introduced, it may be possible to build a 
> persistence layer around this architecture. I think we should move the 
> PageMemory interface itself to the {{org.apache.ignite.plugin.storage}} package 
> and introduce the following interface to allow other components to log their 
> activity in a crash-resistant way:
> {code}
> /**
>  *
>  */
> public interface IgniteWriteAheadLogManager extends GridCacheSharedManager {
> /**
>  * @return {@code true} If we have to always write full pages.
>  */
> public boolean isAlwaysWriteFullPages();
> /**
>  * @return {@code true} if WAL will perform fair syncs on fsync call.
>  */
> public boolean isFullSync();
> /**
>  * Resumes logging after start. When WAL manager is started, it will skip 
> logging any updates until this
>  * method is called to avoid logging changes induced by the state restore 
> procedure.
>  */
> public void resumeLogging(WALPointer lastWrittenPtr) throws 
> IgniteCheckedException;
> /**
>  * Appends the given log entry to the write-ahead log.
>  *
>  * @param entry entry to log.
>  * @return WALPointer that may be passed to {@link #fsync(WALPointer)} 
> method to make sure the record is
>  *  written to the log.
>  * @throws IgniteCheckedException If failed to construct log entry.
>  * @throws StorageException If IO error occurred while writing log entry.
>  */
> public WALPointer log(WALRecord entry) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Makes sure that all log entries written to the log up until the 
> specified pointer are actually persisted to
>  * the underlying storage.
>  *
>  * @param ptr Optional pointer to sync. If {@code null}, will sync up to 
> the latest record.
>  * @throws IgniteCheckedException If
>  * @throws StorageException
>  */
> public void fsync(WALPointer ptr) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Invoke this method to iterate over the written log entries.
>  *
>  * @param start Optional WAL pointer from which to start iteration.
>  * @return Records iterator.
>  * @throws IgniteException If failed to start iteration.
>  * @throws StorageException If IO error occurred while reading WAL 
> entries.
>  */
> public WALIterator replay(WALPointer start) throws 
> IgniteCheckedException, StorageException;
> /**
>  * Gives a hint to WAL manager to clear entries logged before the given 
> pointer. Some entries before the given pointer will be kept because there is 
> a configurable WAL 
> history size. Those entries may be used
>  * for partial partition rebalancing.
>  *
>  * @param ptr Pointer for which it is safe to clear the log.
>  * @return Number of deleted WAL segments.
>  */
> public int truncate(WALPointer ptr);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4337) Introduce persistence interface to allow build reliable persistence plugins

2016-12-07 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730488#comment-15730488
 ] 

Dmitriy Setrakyan commented on IGNITE-4337:
---

Alexey, the interface looks good and I agree with placing it in 
{{org.apache.ignite.plugin.storage}} package. 

> Introduce persistence interface to allow build reliable persistence plugins
> ---
>
> Key: IGNITE-4337
> URL: https://issues.apache.org/jira/browse/IGNITE-4337
> Project: Ignite
>  Issue Type: Sub-task
>  Components: general
>Reporter: Alexey Goncharuk
> Fix For: 2.0
>
>
> With the page memory interface introduced, it may be possible to build a 
> persistence layer around this architecture. I think we should move the 
> PageMemory interface itself to the {{org.apache.ignite.plugin.storage}} package 
> and introduce the following interface to allow other components to log their 
> activity in a crash-resistant way:
> {code}
> /**
>  *
>  */
> public interface IgniteWriteAheadLogManager extends GridCacheSharedManager {
> /**
>  * @return {@code true} If we have to always write full pages.
>  */
> public boolean isAlwaysWriteFullPages();
> /**
>  * @return {@code true} if WAL will perform fair syncs on fsync call.
>  */
> public boolean isFullSync();
> /**
>  * Resumes logging after start. When WAL manager is started, it will skip 
> logging any updates until this
>  * method is called to avoid logging changes induced by the state restore 
> procedure.
>  */
> public void resumeLogging(WALPointer lastWrittenPtr) throws 
> IgniteCheckedException;
> /**
>  * Appends the given log entry to the write-ahead log.
>  *
>  * @param entry entry to log.
>  * @return WALPointer that may be passed to {@link #fsync(WALPointer)} 
> method to make sure the record is
>  *  written to the log.
>  * @throws IgniteCheckedException If failed to construct log entry.
>  * @throws StorageException If IO error occurred while writing log entry.
>  */
> public WALPointer log(WALRecord entry) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Makes sure that all log entries written to the log up until the 
> specified pointer are actually persisted to
>  * the underlying storage.
>  *
>  * @param ptr Optional pointer to sync. If {@code null}, will sync up to 
> the latest record.
>  * @throws IgniteCheckedException If
>  * @throws StorageException
>  */
> public void fsync(WALPointer ptr) throws IgniteCheckedException, 
> StorageException;
> /**
>  * Invoke this method to iterate over the written log entries.
>  *
>  * @param start Optional WAL pointer from which to start iteration.
>  * @return Records iterator.
>  * @throws IgniteException If failed to start iteration.
>  * @throws StorageException If IO error occurred while reading WAL 
> entries.
>  */
> public WALIterator replay(WALPointer start) throws 
> IgniteCheckedException, StorageException;
> /**
>  * Gives a hint to WAL manager to clear entries logged before the given 
> pointer. Some entries before the given pointer will be kept because there is 
> a configurable WAL 
> history size. Those entries may be used
>  * for partial partition rebalancing.
>  *
>  * @param ptr Pointer for which it is safe to clear the log.
>  * @return Number of deleted WAL segments.
>  */
> public int truncate(WALPointer ptr);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730440#comment-15730440
 ] 

Roman Shtykh edited comment on IGNITE-4140 at 12/8/16 12:35 AM:


[~avinogradov] My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?


was (Author: roman_s):
@avinogradov My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> Current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer<K, V> extends StreamAdapter<MessageAndMetadata<byte[], byte[]>, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730440#comment-15730440
 ] 

Roman Shtykh commented on IGNITE-4140:
--

@avinogradov My concern is -- we create the next 1.9 release from the master 
branch and this modification will go into 1.9 release instead of 2.0. Is it a 
valid concern?

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> Current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer<K, V> extends StreamAdapter<MessageAndMetadata<byte[], byte[]>, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3959) SQL: Optimize Date\Time fields conversion.

2016-12-07 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730442#comment-15730442
 ] 

Dmitriy Setrakyan commented on IGNITE-3959:
---

Sergi, if that's the case, then you are the best person to specify how.

> SQL: Optimize Date\Time fields conversion.
> --
>
> Key: IGNITE-3959
> URL: https://issues.apache.org/jira/browse/IGNITE-3959
> Project: Ignite
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 1.6
>Reporter: Andrew Mashenkov
>  Labels: newbie, performance
> Fix For: 2.0
>
>
> SqlFieldsQueries slow down on date\time field processing due to inefficient 
> java.util.Calendar usage for date manipulation by the H2 database.
> A good point to start is the IgniteH2Indexing.wrap() method. Make the 
> optimization for the DATE and TIME types as is already done for the TIMESTAMP 
> type.
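
A minimal illustration of the Calendar-free idea in plain Java (this is not the 
actual IgniteH2Indexing/H2 code; it only shows that date fields can be derived 
without java.util.Calendar):

{code}
import java.sql.Date;
import java.time.LocalDate;

public class DateConversionSketch {
    // Illustration only: java.sql.Date -> year/month/day without java.util.Calendar.
    public static int[] toFields(Date sqlDate) {
        LocalDate d = sqlDate.toLocalDate();

        return new int[] {d.getYear(), d.getMonthValue(), d.getDayOfMonth()};
    }
}
{code}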



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (IGNITE-4009) OptimizedObjectOutputStream checks for error message of removed IgniteEmailProcessor

2016-12-07 Thread Roman Shtykh (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shtykh closed IGNITE-4009.


> OptimizedObjectOutputStream checks for error message of removed 
> IgniteEmailProcessor
> 
>
> Key: IGNITE-4009
> URL: https://issues.apache.org/jira/browse/IGNITE-4009
> Project: Ignite
>  Issue Type: Bug
>Reporter: Roman Shtykh
>Assignee: Roman Shtykh
>Priority: Minor
> Fix For: 1.9
>
>
> I wonder if we need the catch block with {{if 
> (CONVERTED_ERR.contains(t.getMessage()))}} in {{writeObjectOverride(...)}} at 
> all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3469) Get rid of deprecated APIs and entities

2016-12-07 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730079#comment-15730079
 ] 

Denis Magda commented on IGNITE-3469:
-

Refer to this dev list discussion for more details
http://apache-ignite-developers.2346864.n4.nabble.com/Major-release-2-0-and-compilation-td12793.html

> Get rid of deprecated APIs and entities
> ---
>
> Key: IGNITE-3469
> URL: https://issues.apache.org/jira/browse/IGNITE-3469
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Goncharuk
>  Labels: important
> Fix For: 2.0
>
>
> There are more than 220 deprecated elements in the Ignite code kept for the 
> sake of backward compatibility. We need to clean up the code for the Ignite 
> 2.0 release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Sergey Kozlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729490#comment-15729490
 ] 

Sergey Kozlov edited comment on IGNITE-3292 at 12/7/16 6:48 PM:


[~oleg-ostanin]
Thanks for the contribution!
Could you improve the logging so that the {{(+0)}} message is not printed out 
for caches without updates?


was (Author: skozlov):
[~oleg-ostanin]]
Thanks for contribution!
Could you improve the logging and do not print out message {{(+0)}} for caches 
without updates?

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please add logging of preloading progress. This is really useful when we 
> load a lot of entries or when they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Adding an option that controls the frequency of such output also makes sense. 
> For instance, this might be a step (number of entries loaded): if entries are 
> large, we set a small step; if entries are integers, the step will be large.
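
A minimal sketch of the requested logging (the streamer, data source, total and 
step values are illustrative placeholders):

{code}
import java.util.Map;

import org.apache.ignite.IgniteDataStreamer;

public class PreloadProgress {
    // Sketch only: report progress every 'step' entries while preloading.
    public static void preload(IgniteDataStreamer<Integer, byte[]> streamer,
        Iterable<Map.Entry<Integer, byte[]>> data, long total, long step) {
        long loaded = 0;

        for (Map.Entry<Integer, byte[]> e : data) {
            streamer.addData(e.getKey(), e.getValue());

            if (++loaded % step == 0)
                System.out.println("Preloading: " + loaded + " of " + total + " loaded");
        }
    }
}
{code}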



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Sergey Kozlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729490#comment-15729490
 ] 

Sergey Kozlov edited comment on IGNITE-3292 at 12/7/16 6:48 PM:


[~oleg-ostanin]
Thanks for the contribution!
Could you improve the logging so that the {{(+0)}} message is not printed out 
for caches without updates?


was (Author: skozlov):
[~oleg-ostanin]]
Thanks for contribution!
Could you improve the logging and do not print out message {{(+0)}} for caches 
without updates

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please add logging of preloading progress. This is really useful when we 
> load a lot of entries or when they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Adding an option that controls the frequency of such output also makes sense. 
> For instance, this might be a step (number of entries loaded): if entries are 
> large, we set a small step; if entries are integers, the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4003) Slow or faulty client can stall the whole cluster.

2016-12-07 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729499#comment-15729499
 ] 

Andrey Gura commented on IGNITE-4003:
-

Minor fixes. New PR created https://github.com/apache/ignite/pull/1328.

{{TcpCommunicationSpi.ConnectGate}} should be optimized, because it currently 
has a single contention point (a mutex).

Waiting for TC. 

Please, review.

> Slow or faulty client can stall the whole cluster.
> --
>
> Key: IGNITE-4003
> URL: https://issues.apache.org/jira/browse/IGNITE-4003
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Andrey Gura
>Priority: Critical
> Fix For: 2.0
>
>
> Steps to reproduce:
> 1) Start two server nodes and add some data to a cache.
> 2) Start a client from a Docker subnet which is not visible from the outside. 
> The client will join the cluster.
> 3) Try to put something into the cache or start another node to force 
> rebalancing.
> The cluster is stuck at this moment. Root cause: the servers are constantly 
> trying to establish an outgoing connection to the client, but fail as the 
> Docker subnet is not visible from the outside. This may stop virtually all 
> cluster operations.
> Typical thread dump:
> {code}
> org.apache.ignite.IgniteCheckedException: Failed to send message (node may 
> have left the grid or TCP connection cannot be established due to firewall 
> issues) [node=TcpDiscoveryNode [id=a15d74c2-1ec2-4349-9640-aeacd70d8714, 
> addrs=[127.0.0.1, 172.17.0.6], sockAddrs=[/127.0.0.1:0, /127.0.0.1:0, 
> /172.17.0.6:0], discPort=0, order=7241, intOrder=3707, 
> lastExchangeTime=1474096941045, loc=false, ver=1.5.23#20160526-sha1:259146da, 
> isClient=true], topic=T4 [topic=TOPIC_CACHE, 
> id1=949732fd-1360-3a58-8d9e-0ff6ea6182cc, 
> id2=a15d74c2-1ec2-4349-9640-aeacd70d8714, id3=2], msg=GridContinuousMessage 
> [type=MSG_EVT_NOTIFICATION, routineId=7e13c48e-6933-48b2-9f15-8d92007930db, 
> data=null, futId=null], policy=2]
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1129)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1347)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1227)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1198)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1180)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendNotification(GridContinuousProcessor.java:841)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:800)
>  ~[ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:787)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.access$700(CacheContinuousQueryHandler.java:91)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$1.onEntryUpdated(CacheContinuousQueryHandler.java:412)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:343)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:250)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3476)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture$MiniFuture.onResult(GridDhtForceKeysFuture.java:548)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture.onResult(GridDhtForceKeysFuture.java:207)
>  [ignite-core-1.5.23.jar:1.5.23]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.processForceKeyResponse(GridDhtPreloader.java:636)
>  
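
Not part of the fix under review, but for context, the retry behaviour that 
keeps servers busy is bounded by these {{TcpCommunicationSpi}} settings (a 
sketch; the values are illustrative, not recommendations):

{code}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommTimeoutsSketch {
    // Illustrative bounds on how long a server keeps trying to open an outgoing
    // connection to an unreachable (e.g. NATed/Docker) client.
    public static IgniteConfiguration configure() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setConnectTimeout(5_000);      // Initial connect timeout, ms.
        commSpi.setMaxConnectTimeout(30_000);  // Upper bound after backoff, ms.
        commSpi.setReconnectCount(2);          // Connect attempts per address.

        cfg.setCommunicationSpi(commSpi);

        return cfg;
    }
}
{code}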

[jira] [Commented] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Sergey Kozlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729490#comment-15729490
 ] 

Sergey Kozlov commented on IGNITE-3292:
---

[~oleg-ostanin]
Thanks for the contribution!
Could you improve the logging so that the {{(+0)}} message is not printed out 
for caches without updates?

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please add logging of preloading progress. This is really useful when we 
> load a lot of entries or when they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Adding an option that controls the frequency of such output also makes sense. 
> For instance, this might be a step (number of entries loaded): if entries are 
> large, we set a small step; if entries are integers, the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2358) Write behind store swallows exception trace in case of error

2016-12-07 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729478#comment-15729478
 ] 

Andrey Gura commented on IGNITE-2358:
-

The {{toString()}} method was added to all non-test cache store implementations 
except {{PlatformDotNetCacheStore}}. PR created: 
https://github.com/apache/ignite/pull/1329

Please review.
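
For illustration, the kind of change described above on a hypothetical store 
(the real PR touches the existing store implementations):

{code}
import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;

// Hypothetical store showing the change: a toString() override so that
// "Unable to update underlying store: " + store prints something useful.
public class JdbcLikeStore extends CacheStoreAdapter<Integer, String> {
    private String connUrl = "jdbc:h2:mem:test";

    @Override public String load(Integer key) {
        return null; // No-op for the sketch.
    }

    @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
        // No-op for the sketch.
    }

    @Override public void delete(Object key) {
        // No-op for the sketch.
    }

    @Override public String toString() {
        return "JdbcLikeStore [connUrl=" + connUrl + ']';
    }
}
{code}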

> Write behind store swallows exception trace in case of error
> 
>
> Key: IGNITE-2358
> URL: https://issues.apache.org/jira/browse/IGNITE-2358
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Andrey Gura
> Fix For: 2.0
>
>
> See {{GridCacheWriteBehindStore:708}}:
> {code}
> LT.warn(log, e, "Unable to update underlying store: " + store);
> {code}
> The exception trace is never printed out in case of warnings, therefore the 
> user sees only the message, which is usually not enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4264) Offheap cache entries statistics is broken in master

2016-12-07 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729414#comment-15729414
 ] 

Andrey Gura commented on IGNITE-4264:
-

Hi, [~wmz7year]. I've looked at your PR and I think that this fix isn't 
correct. Your cache can be created on some client or server node, but that node 
can be filtered out and may not own any data. In order to get all nodes that 
really have cache data, you should get the affinity nodes using the 
{{ClusterGroup.forDataNodes(String cacheName)}} method.

Also, I'm not sure that filtering out clients (the {{forServers()}} call) is a 
good idea, because a client node can have a local cache and its own metrics.

Could you please take these comments into account and provide a new fix?

Thanks.
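
A sketch of the suggested direction (the cache name is illustrative):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMetrics;
import org.apache.ignite.cluster.ClusterGroup;

public class OffheapMetricsSketch {
    // Aggregate metrics only over nodes that actually own data for the cache,
    // instead of filtering by the server/client role.
    public static long offheapPrimaryEntries(Ignite ignite, String cacheName) {
        ClusterGroup dataNodes = ignite.cluster().forDataNodes(cacheName);

        IgniteCache<?, ?> cache = ignite.cache(cacheName);

        CacheMetrics metrics = cache.metrics(dataNodes);

        return metrics.getOffHeapPrimaryEntriesCount();
    }
}
{code}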

> Offheap cache entries statistics is broken in master
> 
>
> Key: IGNITE-4264
> URL: https://issues.apache.org/jira/browse/IGNITE-4264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: wmz7year
> Fix For: 2.0
>
> Attachments: Test.java
>
>
> A reproducer is attached. The test is fine on the 1.7 release branch, but 
> fails in master:
> cache.metrics().getOffHeapPrimaryEntriesCount() returns a wrong number. Also, 
> it seems the number grows with an increasing number of clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4358) Better error reporting in case of unmarshallable classes.

2016-12-07 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729240#comment-15729240
 ] 

Alexei Scherbakov commented on IGNITE-4358:
---

Rohit,

Thanks for the contribution.

I suppose it's better to do the null check before sending a message with an empty 
runnable and report it to the sender.

You should also change the error message accordingly, to something like:

Trying to execute a null closure. Make sure the closure is not assignable from 
classes listed in MarshallerExclusions.
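A rough sketch of the suggested check (a standalone illustration, not the actual GridClosureProcessor patch):

{code}
import org.apache.ignite.lang.IgniteRunnable;

public class ClosureValidation {
    /** Fails fast on the sender side instead of producing an NPE on the executing node. */
    public static IgniteRunnable requireNonNullClosure(IgniteRunnable job) {
        if (job == null) {
            throw new NullPointerException("Trying to execute a null closure. Make sure the closure " +
                "is not assignable from classes listed in MarshallerExclusions.");
        }

        return job;
    }
}
{code}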



> Better error reporting in case of unmarshallable classes.
> -
>
> Key: IGNITE-4358
> URL: https://issues.apache.org/jira/browse/IGNITE-4358
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute, messaging, newbie
>Affects Versions: 1.6
>Reporter: Alexei Scherbakov
>Assignee: Rohit Mohta
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.0
>
> Attachments: IGNITE-4358-Exceptionlog-05Dec2016.txt, 
> IGNITE-4358-GridClosureProcessor-05Dec2016.patch, PureIgniteRunTest.java
>
>
> When trying to execute a class derived from Thread that implements IgniteRunnable 
> via the compute API, it silently serializes to null, because Thread 
> serialization is prohibited in MarshallerExclusions, and an NPE is thrown on the 
> executing node.
> We need to throw a more informative exception for such a case.
> Reproducer in the attachment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4337) Introduce persistence interface to allow build reliable persistence plugins

2016-12-07 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-4337:
-
Description: 
With the page memory interface introduced, it may be possible to build a 
persistence layer around this architecture. I think we should move the 
PageMemory interface itself to the {{org.apache.ignite.plugin.storage}} package and 
introduce the following interface to allow other components to log their 
activity in a crash-resistant way:

{code}
/**
 *
 */
public interface IgniteWriteAheadLogManager extends GridCacheSharedManager {
    /**
     * @return {@code true} If we have to always write full pages.
     */
    public boolean isAlwaysWriteFullPages();

    /**
     * @return {@code true} if WAL will perform fair syncs on fsync call.
     */
    public boolean isFullSync();

    /**
     * Resumes logging after start. When WAL manager is started, it will skip logging any updates until this
     * method is called to avoid logging changes induced by the state restore procedure.
     */
    public void resumeLogging(WALPointer lastWrittenPtr) throws IgniteCheckedException;

    /**
     * Appends the given log entry to the write-ahead log.
     *
     * @param entry Entry to log.
     * @return WALPointer that may be passed to {@link #fsync(WALPointer)} method to make sure the record is
     *      written to the log.
     * @throws IgniteCheckedException If failed to construct log entry.
     * @throws StorageException If IO error occurred while writing log entry.
     */
    public WALPointer log(WALRecord entry) throws IgniteCheckedException, StorageException;

    /**
     * Makes sure that all log entries written to the log up until the specified pointer are actually persisted to
     * the underlying storage.
     *
     * @param ptr Optional pointer to sync. If {@code null}, will sync up to the latest record.
     * @throws IgniteCheckedException If
     * @throws StorageException
     */
    public void fsync(WALPointer ptr) throws IgniteCheckedException, StorageException;

    /**
     * Invoke this method to iterate over the written log entries.
     *
     * @param start Optional WAL pointer from which to start iteration.
     * @return Records iterator.
     * @throws IgniteException If failed to start iteration.
     * @throws StorageException If IO error occurred while reading WAL entries.
     */
    public WALIterator replay(WALPointer start) throws IgniteCheckedException, StorageException;

    /**
     * Gives a hint to the WAL manager to clear entries logged before the given pointer. Some entries before
     * the given pointer will be kept because there is a configurable WAL history size. Those entries may be used
     * for partial partition rebalancing.
     *
     * @param ptr Pointer for which it is safe to clear the log.
     * @return Number of deleted WAL segments.
     */
    public int truncate(WALPointer ptr);
}
{code}
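A hypothetical usage sketch of the proposed interface, only to illustrate the intended call order (the checkpoint scenario and variable names are assumptions, not part of the proposal):

{code}
// Sketch: log a page update, make it durable, and (on recovery) replay from a checkpoint.
void logAndSync(IgniteWriteAheadLogManager wal, WALRecord pageUpdateRecord, WALPointer lastCheckpointPtr)
    throws IgniteCheckedException, StorageException {
    // Append the record; the returned pointer identifies its position in the WAL.
    WALPointer ptr = wal.log(pageUpdateRecord);

    // Make sure the record is persisted before the corresponding dirty page may be stored.
    if (!wal.isFullSync())
        wal.fsync(ptr);

    // During crash recovery, written entries would be re-applied starting from the checkpoint:
    // WALIterator it = wal.replay(lastCheckpointPtr);

    // Once a checkpoint is complete, older segments can be released:
    // int deletedSegments = wal.truncate(lastCheckpointPtr);
}
{code}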

  was:
If the page memory interface is introduced, it may be possible to build a 
persistence layer around this architecture. I think we should add some form of 
persistence logging to allow us to build a crash-resistant system in the future.

Something like
{code}
public interface IgnitePersistenceLogManager extends GridCacheSharedManager {
    /**
     * @return {@code true} If we have to always write full pages.
     */
    public boolean isAlwaysWriteFullPages();

    /**
     * @return {@code true} if persistence manager will perform fair syncs on fsync call.
     */
    public boolean isFullSync();

    /**
     * Resumes logging after start. When persistence manager is started, it will skip logging any updates until this
     * method is called to avoid logging changes induced by the state restore procedure.
     */
    public void resumeLogging(LogPointer lastWrittenPtr) throws IgniteCheckedException;

    /**
     * Appends the given log entry to the write-ahead log.
     *
     * @param entry Entry to log.
     * @return LogPointer that may be passed to {@link #fsync(LogPointer)} method to make sure the record is
     *      written to the log.
     * @throws IgniteCheckedException If failed to construct log entry.
     * @throws StorageException If IO error occurred while writing log entry.
     */
    public LogPointer log(LogRecord entry) throws IgniteCheckedException, StorageException;

    /**
     * Makes sure that all log entries written to the log up until the specified pointer are actually persisted to
     * the underlying storage.
     *
     * @param ptr Optional pointer to sync. If {@code null}, will sync up to the latest record.
     * @throws IgniteCheckedException If
     * @throws StorageException
     */
    public void fsync(LogPointer ptr) throws IgniteCheckedException, StorageException;

    /**
     * Invoke this method to iterate over the written log entries.
     *
     * @param start Optional log pointer 

[jira] [Commented] (IGNITE-4046) C++: Support DML API

2016-12-07 Thread Igor Sapego (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729229#comment-15729229
 ] 

Igor Sapego commented on IGNITE-4046:
-

Added test.

> C++: Support DML API
> 
>
> Key: IGNITE-4046
> URL: https://issues.apache.org/jira/browse/IGNITE-4046
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Igor Sapego
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of the ODBC and JDBC drivers.
> As the next step we should include similar functionality in Ignite.C++ 
> by doing the following:
> - Implement DML API;
> - Enhance {{query_example.cpp}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.C++ readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when 
> ticket IGNITE-4018 is ready.
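For reference, the Java-side DML style that the example is supposed to switch to looks roughly like the sketch below (the cache and table names are assumptions; the C++ API would presumably mirror this through its own SqlFieldsQuery):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DmlSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumes a cache named "personCache" with an SQL-visible Person type.
            IgniteCache<Long, Object> cache = ignite.cache("personCache");

            // INSERT instead of cache.put().
            cache.query(new SqlFieldsQuery(
                "INSERT INTO Person (_key, name) VALUES (?, ?)").setArgs(1L, "John")).getAll();

            // UPDATE and DELETE examples.
            cache.query(new SqlFieldsQuery(
                "UPDATE Person SET name = ? WHERE _key = ?").setArgs("Jane", 1L)).getAll();

            cache.query(new SqlFieldsQuery(
                "DELETE FROM Person WHERE _key = ?").setArgs(1L)).getAll();
        }
    }
}
{code}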



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4379) SQL: Local SQL field query broken in master

2016-12-07 Thread Andrew Mashenkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729205#comment-15729205
 ] 

Andrew Mashenkov commented on IGNITE-4379:
--

Tests look good.

> SQL: Local SQL field query broken in master
> ---
>
> Key: IGNITE-4379
> URL: https://issues.apache.org/jira/browse/IGNITE-4379
> Project: Ignite
>  Issue Type: Bug
>  Components: SQL
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
> Fix For: 2.0
>
>
> A local SQL fields query returns entries from backup partitions.
> Currently the query context is cleared in the 
> IgniteH2Indexing.queryLocalSqlFields() method before the query is actually executed.
> It seems GridQueryFieldsResultAdapter should restore the query context before 
> the query runs in its iterator() method, or query execution should be moved back 
> outside GridQueryFieldsResultAdapter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-2231) 'Put' cache event treats entry created at lock acquisition time as old value

2016-12-07 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-2231:

Assignee: (was: wmz7year)

> 'Put' cache event treats entry created at lock acquisition time as old value
> 
>
> Key: IGNITE-2231
> URL: https://issues.apache.org/jira/browse/IGNITE-2231
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Denis Magda
>  Labels: important
> Fix For: 2.0
>
> Attachments: EventsProcessingTest.java
>
>
> Subscribe to the EVT_CACHE_OBJECT_PUT event on one node and perform 'put' 
> operations from the other to an empty transactional cache.
> The subscriber will receive a notification saying that there was an old value 
> for the key at the time the new one was being inserted. In fact there was no old 
> value; an entry with {{null}} as its value was generated as part of 
> implicit lock acquisition.
> This code snippet in {{GridCacheMapEntry}} (the innerSet function) generates a 
> wrong event:
> {noformat}
> if (evt && newVer != null && 
> cctx.events().isRecordable(EVT_CACHE_OBJECT_PUT)) {
> CacheObject evtOld = cctx.unwrapTemporary(old);
> cctx.events().addEvent(partition(),
> key,
> evtNodeId,
> tx == null ? null : tx.xid(),
> newVer,
> EVT_CACHE_OBJECT_PUT,
> val,
> val != null,
> evtOld,
> evtOld != null || hasValueUnlocked(),
> subjId, null, taskName,
> keepBinary);
> }
> {noformat}
> Attached the test that lets reproduce the issue.
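A minimal local subscription sketch that shows the fields in question (the cache setup is illustrative; EVT_CACHE_OBJECT_PUT must be enabled via IgniteConfiguration.setIncludeEventTypes for events to be recorded at all):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PutEventSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.events().localListen(new IgnitePredicate<CacheEvent>() {
                @Override public boolean apply(CacheEvent evt) {
                    // For the very first put into an empty transactional cache this should
                    // report hasOldValue() == false; the bug makes it report true.
                    System.out.println("PUT key=" + evt.key() + ", hasOldValue=" + evt.hasOldValue());

                    return true; // Keep listening.
                }
            }, EventType.EVT_CACHE_OBJECT_PUT);
        }
    }
}
{code}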



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2358) Write behind store swallows exception trace in case of error

2016-12-07 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729091#comment-15729091
 ] 

Andrey Gura commented on IGNITE-2358:
-

Exception trace logging is already fixed (see IGNITE-3770), so only the provided 
store implementations need to be fixed.

> Write behind store swallows exception trace in case of error
> 
>
> Key: IGNITE-2358
> URL: https://issues.apache.org/jira/browse/IGNITE-2358
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Valentin Kulichenko
> Fix For: 2.0
>
>
> See {{GridCacheWriteBehindStore:708}}:
> {code}
> LT.warn(log, e, "Unable to update underlying store: " + store);
> {code}
> The exception trace is never printed out in case of warnings, therefore the user 
> sees only the message, which is usually not enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-2358) Write behind store swallows exception trace in case of error

2016-12-07 Thread Andrey Gura (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura reassigned IGNITE-2358:
---

Assignee: Andrey Gura

> Write behind store swallows exception trace in case of error
> 
>
> Key: IGNITE-2358
> URL: https://issues.apache.org/jira/browse/IGNITE-2358
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Andrey Gura
> Fix For: 2.0
>
>
> See {{GridCacheWriteBehindStore:708}}:
> {code}
> LT.warn(log, e, "Unable to update underlying store: " + store);
> {code}
> The exception trace is never printed out in case of warnings, therefore the user 
> sees only the message, which is usually not enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Anil (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729068#comment-15729068
 ] 

Anil commented on IGNITE-4140:
--

Thank you [~avinogradov] and [~vkulichenko]

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> The current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores the tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer extends StreamAdapter, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method, providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to the 
> {{IgniteDataStreamer}}.
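A hedged sketch of the extractor side of that design; the message parsing, the byte-array types, and the word-count logic are made up for illustration and are not the real KafkaStreamer API:

{code}
import java.util.HashMap;
import java.util.Map;

import kafka.message.MessageAndMetadata;
import org.apache.ignite.stream.StreamMultipleTupleExtractor;

/** Turns one Kafka message into several cache entries, which plain decoders cannot express. */
public class WordCountExtractor
    implements StreamMultipleTupleExtractor<MessageAndMetadata<byte[], byte[]>, String, Integer> {
    @Override public Map<String, Integer> extract(MessageAndMetadata<byte[], byte[]> msg) {
        Map<String, Integer> tuples = new HashMap<>();

        for (String word : new String(msg.message()).split("\\s+")) {
            Integer cnt = tuples.get(word);

            tuples.put(word, cnt == null ? 1 : cnt + 1);
        }

        return tuples;
    }
}
{code}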



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729065#comment-15729065
 ] 

ASF GitHub Bot commented on IGNITE-3292:


Github user oleg-ostanin closed the pull request at:

https://github.com/apache/ignite/pull/1317


> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4367) .NET: Fix flaky tests

2016-12-07 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-4367.

Resolution: Fixed
  Assignee: (was: Pavel Tupitsyn)

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3290) Yardstick: log when preloading/warmup started/finished

2016-12-07 Thread Nikolay Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15729035#comment-15729035
 ] 

Nikolay Tikhonov commented on IGNITE-3290:
--

This ticket is part of IGNITE-3292. 

> Yardstick: log when preloading/warmup started/finished
> -
>
> Key: IGNITE-3290
> URL: https://issues.apache.org/jira/browse/IGNITE-3290
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
>
> Please add messages to the log that would indicate that:
> 1) preloading started
> 2) preloading finished
> 3) warmup started



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4367) .NET: Fix flaky tests

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728956#comment-15728956
 ] 

Pavel Tupitsyn edited comment on IGNITE-4367 at 12/7/16 3:17 PM:
-

* ReconnectTest.TestClusterRestart, ReconnectTest.TestFailedConnection, 
ExecutableTest.TestInvalidCmdArgs fixed (increased timeouts).

* TestLockSimple - never failed in master.

* EntityFrameworkCacheTest.*Multithreaded - reduce duration to avoid 
overloading SQL CE.




was (Author: ptupitsyn):
ReconnectTest.TestClusterRestart, ReconnectTest.TestFailedConnection, 
ExecutableTest.TestInvalidCmdArgs fixed (increased timeouts).

TestLockSimple - never failed in master.



> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4367) .NET: Fix flaky tests

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728956#comment-15728956
 ] 

Pavel Tupitsyn edited comment on IGNITE-4367 at 12/7/16 3:08 PM:
-

ReconnectTest.TestClusterRestart, ReconnectTest.TestFailedConnection, 
ExecutableTest.TestInvalidCmdArgs fixed (increased timeouts).

TestLockSimple - never failed in master.




was (Author: ptupitsyn):
ReconnectTest.TestClusterRestart, ReconnectTest.TestFailedConnection, 
ExecutableTest.TestInvalidCmdArgs fixed (increased timeouts).

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4367) .NET: Fix flaky tests

2016-12-07 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-4367:
---
Description: 
TeamCity has detected a number of flaky tests in .NET: 
http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage

  was:
TeamCity has detected a number of flaky tests in .NET: 
http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests


> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests=IgniteTests_IgnitePlatformNetCoverage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4367) .NET: Fix flaky tests

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728956#comment-15728956
 ] 

Pavel Tupitsyn commented on IGNITE-4367:


ReconnectTest.TestClusterRestart, ReconnectTest.TestFailedConnection, 
ExecutableTest.TestInvalidCmdArgs fixed (increased timeouts).

> .NET: Fix flaky tests
> -
>
> Key: IGNITE-4367
> URL: https://issues.apache.org/jira/browse/IGNITE-4367
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 1.9
>
>
> TeamCity has detected a number of flaky tests in .NET: 
> http://ci.ignite.apache.org/project.html?projectId=IgniteTests=flakyTests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Oleg Ostanin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728912#comment-15728912
 ] 

Oleg Ostanin commented on IGNITE-3292:
--

Thank you for your comment. I have already made some changes, please look:
http://reviews.ignite.apache.org/ignite/review/IGNT-CR-33

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-4009) OptimizedObjectOutputStream checks for error message of removed IgniteEmailProcessor

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov reassigned IGNITE-4009:


Assignee: Semen Boikov  (was: Roman Shtykh)

> OptimizedObjectOutputStream checks for error message of removed 
> IgniteEmailProcessor
> 
>
> Key: IGNITE-4009
> URL: https://issues.apache.org/jira/browse/IGNITE-4009
> Project: Ignite
>  Issue Type: Bug
>Reporter: Roman Shtykh
>Assignee: Semen Boikov
>Priority: Minor
> Fix For: 1.8
>
>
> I wonder if we need a catch part with {{if 
> (CONVERTED_ERR.contains(t.getMessage()))}} of {{writeObjectOverride(...)}} at 
> all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Deleted] (IGNITE-4390) [IGNITE-4388] .NET: Improve LINQ error message when aggregates are used with multiple fields

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov deleted IGNITE-4390:
-


> [IGNITE-4388] .NET: Improve LINQ error message when aggregates are used with 
> multiple fields
> 
>
> Key: IGNITE-4390
> URL: https://issues.apache.org/jira/browse/IGNITE-4390
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Tupitsyn
>Priority: Minor
>
> Current message is "Aggregate functions do not support multiple fields". This 
> happens when you try to apply aggregate function (such as COUNT) to a query 
> which selects multiple fields (select new {X, Y}). LINQ allows that, but we 
> can't express this in SQL.
> * See what Entity Framework does
> * Improve error message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Deleted] (IGNITE-4389) [IGNITE-4386] Hadoop tests affect each other through IgniteHadoopClientProtocolProvider#cliMap

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov deleted IGNITE-4389:
-


> [IGNITE-4386] Hadoop tests affect each other through 
> IgniteHadoopClientProtocolProvider#cliMap
> --
>
> Key: IGNITE-4389
> URL: https://issues.apache.org/jira/browse/IGNITE-4389
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Veselovsky
>
> Tests affect each other through map
> org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider#cliMap 
> that never clears. 
> For example, test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolMultipleServersSelfTest#testSingleAddress
>  sometimes fails if test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolSelfTest#testJobCounters
>  runs before it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4389) [IGNITE-4386] Hadoop tests affect each other through IgniteHadoopClientProtocolProvider#cliMap

2016-12-07 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4389:


 Summary: [IGNITE-4386] Hadoop tests affect each other through 
IgniteHadoopClientProtocolProvider#cliMap
 Key: IGNITE-4389
 URL: https://issues.apache.org/jira/browse/IGNITE-4389
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Veselovsky


Tests affect each other through map
org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider#cliMap 
that never clears. 

For example, test 
org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolMultipleServersSelfTest#testSingleAddress
 sometimes fails if test 
org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolSelfTest#testJobCounters
 runs before it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Oleg Ostanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin updated IGNITE-3292:
-
Comment: was deleted

(was: Thank you for your comments. I have already made some changes, here is 
the example of log output:
http://reviews.ignite.apache.org/ignite/review/IGNT-CR-32)

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4390) [IGNITE-4388] .NET: Improve LINQ error message when aggregates are used with multiple fields

2016-12-07 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4390:


 Summary: [IGNITE-4388] .NET: Improve LINQ error message when 
aggregates are used with multiple fields
 Key: IGNITE-4390
 URL: https://issues.apache.org/jira/browse/IGNITE-4390
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Tupitsyn
Priority: Minor


Current message is "Aggregate functions do not support multiple fields". This 
happens when you try to apply aggregate function (such as COUNT) to a query 
which selects multiple fields (select new {X, Y}). LINQ allows that, but we 
can't express this in SQL.

* See what Entity Framework does
* Improve error message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Oleg Ostanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin updated IGNITE-3292:
-
Comment: was deleted

(was: ```
log file looks like this:
...
<15:02:48> Preloading started.
<15:02:48> Preloading:atomic-index 0
(+0)
<15:02:48> Preloading:query0
(+0)
<15:02:48> Preloading:tx-fat-values0
(+0)
<15:02:48> Preloading:compute  0
(+0)
<15:02:49> Preloading:atomic-offheap-values0
(+0)
<15:02:49> Preloading:tx-offheap   0
(+0)
<15:02:49> Preloading:atomic-offheap   0
(+0)
<15:02:49> Preloading:orgCache 0
(+0)
<15:02:49> Preloading:atomic-fat-values0
(+0)
<15:02:49> Preloading:tx-offheap-values0
(+0)
...
<15:02:59> Preloading:atomic-index 
176841   (+176841)
<15:02:59> Preloading:query
188753   (+188753)
<15:02:59> Preloading:tx-fat-values
157049   (+157049)
<15:02:59> Preloading:compute  
185015   (+185015)
<15:02:59> Preloading:atomic-offheap-values
181587   (+181587)
<15:02:59> Preloading:tx-offheap   
150125   (+150125)
<15:02:59> Preloading:atomic-offheap   
149441   (+149441)
<15:02:59> Preloading:orgCache 
194663   (+194663)
<15:02:59> Preloading:atomic-fat-values
149060   (+149060)
<15:02:59> Preloading:tx-offheap-values
184565   (+184565)
...
<15:03:37> Preloading finished.
```
)

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4386) Hadoop tests affect each other through IgniteHadoopClientProtocolProvider#cliMap

2016-12-07 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728884#comment-15728884
 ] 

Ivan Veselovsky commented on IGNITE-4386:
-

Shall we also invoke 
org.apache.ignite.internal.client.GridClientFactory#stopAll() ?

> Hadoop tests affect each other through 
> IgniteHadoopClientProtocolProvider#cliMap
> 
>
> Key: IGNITE-4386
> URL: https://issues.apache.org/jira/browse/IGNITE-4386
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.7
>Reporter: Ivan Veselovsky
>Assignee: Vladimir Ozerov
> Fix For: 1.8
>
>
> Tests affect each other through map
> org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider#cliMap 
> that never clears. 
> For example, test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolMultipleServersSelfTest#testSingleAddress
>  sometimes fails if test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolSelfTest#testJobCounters
>  runs before it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Oleg Ostanin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728878#comment-15728878
 ] 

Oleg Ostanin commented on IGNITE-3292:
--

```
log file looks like this:
...
<15:02:48> Preloading started.
<15:02:48> Preloading:atomic-index           0 (+0)
<15:02:48> Preloading:query                  0 (+0)
<15:02:48> Preloading:tx-fat-values          0 (+0)
<15:02:48> Preloading:compute                0 (+0)
<15:02:49> Preloading:atomic-offheap-values  0 (+0)
<15:02:49> Preloading:tx-offheap             0 (+0)
<15:02:49> Preloading:atomic-offheap         0 (+0)
<15:02:49> Preloading:orgCache               0 (+0)
<15:02:49> Preloading:atomic-fat-values      0 (+0)
<15:02:49> Preloading:tx-offheap-values      0 (+0)
...
<15:02:59> Preloading:atomic-index           176841 (+176841)
<15:02:59> Preloading:query                  188753 (+188753)
<15:02:59> Preloading:tx-fat-values          157049 (+157049)
<15:02:59> Preloading:compute                185015 (+185015)
<15:02:59> Preloading:atomic-offheap-values  181587 (+181587)
<15:02:59> Preloading:tx-offheap             150125 (+150125)
<15:02:59> Preloading:atomic-offheap         149441 (+149441)
<15:02:59> Preloading:orgCache               194663 (+194663)
<15:02:59> Preloading:atomic-fat-values      149060 (+149060)
<15:02:59> Preloading:tx-offheap-values      184565 (+184565)
...
<15:03:37> Preloading finished.
```


> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4341) Add TeraSort example as a unit test to Ignite

2016-12-07 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728873#comment-15728873
 ] 

Ivan Veselovsky commented on IGNITE-4341:
-

branch "ignite-4341-master2"

> Add TeraSort example as a unit test to Ignite
> -
>
> Key: IGNITE-4341
> URL: https://issues.apache.org/jira/browse/IGNITE-4341
> Project: Ignite
>  Issue Type: Test
>  Components: hadoop
>Affects Versions: 1.7
>Reporter: Ivan Veselovsky
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> Add canonical TeraSort example as a unit test. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3292) Yardstick: add logging of preloading progress

2016-12-07 Thread Oleg Ostanin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728867#comment-15728867
 ] 

Oleg Ostanin commented on IGNITE-3292:
--

Thank you for your comments. I have already made some changes, here is the 
example of log output:
http://reviews.ignite.apache.org/ignite/review/IGNT-CR-32

> Yardstick: add logging of preloading progress
> -
>
> Key: IGNITE-3292
> URL: https://issues.apache.org/jira/browse/IGNITE-3292
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
> Fix For: 1.9
>
>
> Please, add logging of preloading progress. This is really useful when we 
> load a lot of entries or they are large. 
> For instance: "Preloading: 200 of 1000 loaded".
> Also adding an option that controls frequency of such output makes sense. For 
> instance, this might be a step (number of entries loaded) - if entries are 
> large, we set small step, if entries are integers,  the step will be large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4388) .NET: Improve LINQ error message when aggregates are used with multiple fields

2016-12-07 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4388:
--

 Summary: .NET: Improve LINQ error message when aggregates are used 
with multiple fields
 Key: IGNITE-4388
 URL: https://issues.apache.org/jira/browse/IGNITE-4388
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 1.7, 1.8
Reporter: Pavel Tupitsyn
Priority: Trivial
 Fix For: 1.9


Current message is "Aggregate functions do not support multiple fields". This 
happens when you try to apply aggregate function (such as COUNT) to a query 
which selects multiple fields (select new {X, Y}). LINQ allows that, but we 
can't express this in SQL.

* See what Entity Framework does
* Improve error message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4386) Hadoop tests affect each other through IgniteHadoopClientProtocolProvider#cliMap

2016-12-07 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728858#comment-15728858
 ] 

Ivan Veselovsky commented on IGNITE-4386:
-

https://github.com/apache/ignite/pull/1327 

> Hadoop tests affect each other through 
> IgniteHadoopClientProtocolProvider#cliMap
> 
>
> Key: IGNITE-4386
> URL: https://issues.apache.org/jira/browse/IGNITE-4386
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.7
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
> Fix For: 1.8
>
>
> Tests affect each other through map
> org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider#cliMap 
> that never clears. 
> For example, test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolMultipleServersSelfTest#testSingleAddress
>  sometimes fails if test 
> org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolSelfTest#testJobCounters
>  runs before it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-3558) Affinity task hangs when Collision SPI produces a lot of job rejections & Failover SPI produces many attempts

2016-12-07 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722540#comment-15722540
 ] 

Taras Ledkov edited comment on IGNITE-3558 at 12/7/16 2:07 PM:
---

Pull request to run tests: 
[pull/1326|https://github.com/apache/ignite/pull/1326]


was (Author: tledkov-gridgain):
Pull request to run tests: 
[pull/1316|https://github.com/apache/ignite/pull/1316]

> Affinity task hangs when Collision SPI produces a lot of job rejections & 
> Failover SPI produces many attempts
> -
>
> Key: IGNITE-3558
> URL: https://issues.apache.org/jira/browse/IGNITE-3558
> Project: Ignite
>  Issue Type: Bug
>  Components: compute
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
> Fix For: 2.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The test to reproduce:
> IgniteCacheLockPartitionOnAffinityRunWithCollisionSpiTest#testJobFinishing
> *Root cause*
> GridJobExecuteResponse isn't sent from the target node because there is 
> confusion between GridJobWorker instances in the CollisionContext.
> *Suggestion*
> The method GridJobProcessor.CollisionJobContext.cancel()
> uses passiveJobs.remove(jobWorker.getJobId(), jobWorker). 
> *passiveJobs* is a ConcurrentHashMap, and GridJobWorker.equals() is implemented as 
> equality of jobId.
> So, when two threads try to cancel two workers with *the same jobId*, we 
> have the following case:
> - thread0 removes jobWorker0 & cancels jobWorker0.
> - thread0 puts jobWorker1 (because jobWorker0 is already removed);
> - thread1 (which has a copy of jobWorker0) tries to cancel it.
> - thread1 removes jobWorker1 instead of jobWorker0 (because jobId is used for 
> identification);
> - thread1 doesn't send an ExecuteResponse because jobWorker0 has been canceled.
> *Proposal*
> Try to use the system default equals for GridJobWorker
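A toy, self-contained demo of that remove() behavior (not the actual GridJobProcessor code): ConcurrentMap.remove(key, value) matches the value by equals(), so a worker with the same jobId removes its sibling.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RemoveByEqualsDemo {
    /** Equality is based on jobId only, mimicking the GridJobWorker behavior described above. */
    static final class Worker {
        private final String jobId;

        Worker(String jobId) {
            this.jobId = jobId;
        }

        @Override public boolean equals(Object o) {
            return o instanceof Worker && ((Worker)o).jobId.equals(jobId);
        }

        @Override public int hashCode() {
            return jobId.hashCode();
        }
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Worker> passiveJobs = new ConcurrentHashMap<>();

        Worker w0 = new Worker("job-1");
        Worker w1 = new Worker("job-1"); // Different instance, same jobId.

        passiveJobs.put("job-1", w1);

        // Intends to remove w0 but removes w1, because w1.equals(w0) is true.
        System.out.println(passiveJobs.remove("job-1", w0)); // Prints 'true'.
    }
}
{code}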



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3290) Yardstick: log when preloading/warmup started/finished

2016-12-07 Thread Oleg Ostanin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728847#comment-15728847
 ] 

Oleg Ostanin commented on IGNITE-3290:
--

Working on the task, added printing logs for preloading.

> Yardstick: log when preloading/warmup started/finished
> -
>
> Key: IGNITE-3290
> URL: https://issues.apache.org/jira/browse/IGNITE-3290
> Project: Ignite
>  Issue Type: Wish
>  Components: yardstick
>Reporter: Ksenia Rybakova
>Assignee: Oleg Ostanin
>
> Please add messages to the log that would indicate that:
> 1) preloading started
> 2) preloading finished
> 3) warmup started



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4387) Incorrect Karaf OSGI feature for ignite-rest-http

2016-12-07 Thread Sergej Sidorov (JIRA)
Sergej Sidorov created IGNITE-4387:
--

 Summary: Incorrect Karaf OSGI feature for ignite-rest-http
 Key: IGNITE-4387
 URL: https://issues.apache.org/jira/browse/IGNITE-4387
 Project: Ignite
  Issue Type: Bug
  Components: osgi, rest
Affects Versions: 1.7
Reporter: Sergej Sidorov
Assignee: Sergej Sidorov
Priority: Minor


In accordance with commit f177d4312c47 (IGNITE-3277 Replaced outdated 
json-lib 2.4 to modern Jackson 2.7.5), the dependencies in the ignite-rest-http module 
have been updated, but the Karaf OSGI features config has not been updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4386) Hadoop tests affect each other through IgniteHadoopClientProtocolProvider#cliMap

2016-12-07 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4386:
---

 Summary: Hadoop tests affect each other through 
IgniteHadoopClientProtocolProvider#cliMap
 Key: IGNITE-4386
 URL: https://issues.apache.org/jira/browse/IGNITE-4386
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 1.7
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
 Fix For: 1.8


Tests affect each other through map
org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider#cliMap 
that never clears. 

For example, test 
org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolMultipleServersSelfTest#testSingleAddress
 sometimes fails if test 
org.apache.ignite.internal.processors.hadoop.impl.client.HadoopClientProtocolSelfTest#testJobCounters
 runs before it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4358) Better error reporting in case of unmarshallable classes.

2016-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728835#comment-15728835
 ] 

ASF GitHub Bot commented on IGNITE-4358:


GitHub user rmohta opened a pull request:

https://github.com/apache/ignite/pull/1325

IGNITE-4358: NPE Custom message

Validate non-null runnable.
Custom message to give more information, why is the user seeing NPE.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmohta/ignite improvement/IGNITE-4358

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1325.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1325


commit 2dce408b36113b91007975028ce985751c52fe99
Author: rmohta 
Date:   2016-12-05T18:25:32Z

IGNITE-4358: NPE Custom message

Validate non-null runnable.
Custom message to give more information, why is the user seeing NPE.




> Better error reporting in case of unmarshallable classes.
> -
>
> Key: IGNITE-4358
> URL: https://issues.apache.org/jira/browse/IGNITE-4358
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute, messaging, newbie
>Affects Versions: 1.6
>Reporter: Alexei Scherbakov
>Assignee: Rohit Mohta
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.0
>
> Attachments: IGNITE-4358-Exceptionlog-05Dec2016.txt, 
> IGNITE-4358-GridClosureProcessor-05Dec2016.patch, PureIgniteRunTest.java
>
>
> When trying to execute a class derived from Thread that implements IgniteRunnable 
> via the compute API, it silently serializes to null, because Thread 
> serialization is prohibited in MarshallerExclusions, and an NPE is thrown on the 
> executing node.
> We need to throw a more informative exception for such a case.
> Reproducer in the attachment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4045) .NET: Support DML API

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728746#comment-15728746
 ] 

Pavel Tupitsyn edited comment on IGNITE-4045 at 12/7/16 1:28 PM:
-

FieldIdentityResolver issue uncovered:
Hash codes for many basic types (strings, for example) are computed differently 
in .NET and Java. 

What should we do?
* Implement hash code algorithms the same way as in Java for all basic types
* Change Java FieldIdentityResolver to compute field hash codes in serialized 
form (which is also a lot faster for all platforms)
* Remove FieldIdentityResolver support in .NET

For the first iteration I suggest supporting only ArrayIdentityResolver.


was (Author: ptupitsyn):
FieldIdentityResolver issue uncovered:
Hash codes for many basic types (strings, for example) are computed differently 
in .NET and Java. 

What should we do?
* Implement hash code algorithms the same way as in Java for all basic types
* Remove FieldIdentityResolver support in .NET

> .NET: Support DML API
> -
>
> Key: IGNITE-4045
> URL: https://issues.apache.org/jira/browse/IGNITE-4045
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Pavel Tupitsyn
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of the ODBC and JDBC drivers.
> As the next step we should include similar functionality in Ignite.NET 
> by doing the following:
> - Implement DML API;
> - Enhance {{QueryExample.cs}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.NET readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when 
> ticket IGNITE-4018 is ready 
> (https://apacheignite.readme.io/docs/distributed-dml).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4045) .NET: Support DML API

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728746#comment-15728746
 ] 

Pavel Tupitsyn commented on IGNITE-4045:


FieldIdentityResolver issue uncovered:
Hash codes for many basic types (strings, for example) are computed differently 
in .NET and Java. 

What should we do?
* Implement hash code algorithms the same way as in Java for all basic types
* Remove FieldIdentityResolver support in .NET

> .NET: Support DML API
> -
>
> Key: IGNITE-4045
> URL: https://issues.apache.org/jira/browse/IGNITE-4045
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Pavel Tupitsyn
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of the ODBC and JDBC drivers.
> As the next step we should include similar functionality in Ignite.NET 
> by doing the following:
> - Implement DML API;
> - Enhance {{QueryExample.cs}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.NET readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when 
> ticket IGNITE-4018 is ready 
> (https://apacheignite.readme.io/docs/distributed-dml).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-4046) C++: Support DML API

2016-12-07 Thread Igor Sapego (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728740#comment-15728740
 ] 

Igor Sapego edited comment on IGNITE-4046 at 12/7/16 1:22 PM:
--

Implemented an algorithm that matches {{BinaryArrayIdentityResolver}}'s. When I tried 
to implement {{BinaryFieldIdentityResolver}}, I found out that in Java it calculates 
the hash by applying a hash function to the set of field hashes. The field hashes, in 
their turn, are calculated using Java-specific algorithms and the internal structure 
of Java data types. I don't see a way to implement an algorithm that is going to be 
able to produce the same hash in C++ or any other platform.

It is obvious that a portable hashing algorithm should only use the binary format 
to generate the object hash. {{BinaryFieldIdentityResolver}} is not portable for now.
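To illustrate the portability point (this is a generic byte-array hash, not the exact {{BinaryArrayIdentityResolver}} implementation): hashing over the serialized bytes depends only on the binary format, so any platform that produces the same bytes gets the same hash.

{code}
/** Generic 31-based rolling hash over a byte range; identical bytes give an identical hash on any platform. */
public static int bytesHash(byte[] data, int from, int to) {
    int hash = 1;

    for (int i = from; i < to; i++)
        hash = 31 * hash + data[i];

    return hash;
}
{code}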


was (Author: isapego):
Implemented algorithm that matches {{BinaryArrayIdentityResolver}}'s. When I've 
tried to implement {{BinaryFieldIdentityResolver}} I've found out that in Java 
it calculates hash by applying hash function to set of the fields hashes. Field 
hashes in their turn calculated using Java-specific algorithms and Java data 
types internal structure. I don't see a way to implement the algorithm, which 
is going to be able to the same hash in C++ or any other platform.

> C++: Support DML API
> 
>
> Key: IGNITE-4046
> URL: https://issues.apache.org/jira/browse/IGNITE-4046
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Igor Sapego
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of the ODBC and JDBC drivers.
> As the next step we should include similar functionality in Ignite.C++ 
> by doing the following:
> - Implement DML API;
> - Enhance {{query_example.cpp}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.C++ readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when 
> ticket IGNITE-4018 is ready.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4046) C++: Support DML API

2016-12-07 Thread Igor Sapego (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728740#comment-15728740
 ] 

Igor Sapego commented on IGNITE-4046:
-

Implemented an algorithm that matches {{BinaryArrayIdentityResolver}}'s. When I tried 
to implement {{BinaryFieldIdentityResolver}}, I found out that in Java it calculates 
the hash by applying a hash function to the set of field hashes. The field hashes, in 
their turn, are calculated using Java-specific algorithms and the internal structure 
of Java data types. I don't see a way to implement an algorithm that is going to be 
able to produce the same hash in C++ or any other platform.

> C++: Support DML API
> 
>
> Key: IGNITE-4046
> URL: https://issues.apache.org/jira/browse/IGNITE-4046
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Denis Magda
>Assignee: Igor Sapego
>  Labels: roadmap
> Fix For: 2.0
>
>
> Ignite's Java component will provide support for DML soon (IGNITE-2294). At 
> the same time DML will be supported at the level of the ODBC and JDBC drivers.
> As the next step we should include similar functionality in Ignite.C++ 
> by doing the following:
> - Implement DML API;
> - Enhance {{query_example.cpp}} by doing INSERTs instead of cache.puts and 
> adding UPDATE and DELETE operation examples.
> - Add documentation to Ignite.C++ readme.io covering the feature. Most likely 
> most of the content can be taken from the general documentation when 
> ticket IGNITE-4018 is ready.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2703) .NET: Dynamically registered classes must use binary serialization if possible

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728675#comment-15728675
 ] 

Pavel Tupitsyn commented on IGNITE-2703:


Merged with IGNITE-4157, everything works.

The only issue is with the binary classless scenario (BinaryModeExample, etc.): we 
have to swallow "ClassNotFoundException: Unknown pair" in .NET, which seems wrong.
In a similar scenario in Java we don't even call MarshallerContext.

> .NET: Dynamically registered classes must use binary serialization if possible
> --
>
> Key: IGNITE-2703
> URL: https://issues.apache.org/jira/browse/IGNITE-2703
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .net, breaking-api, roadmap
> Fix For: 2.0
>
>
> At present we support dynamic class registration in .NET, but such classes are 
> written using the default .NET mechanism. This is counterintuitive for users and 
> not consistent with Java, where such classes are written in binary form.
> Proposed implementation plan:
> 1) For each dynamically registered class we must understand whether it can 
> be serialized through binary or not. If not, print a warning and fall back to 
> .NET serialization.
> 2) Before writing a class we must ensure that its [typeId -> name] pair is 
> known to the cluster. If not, write the full class name instead of the type ID. Java 
> already does that.
> 3) Last, to support backward compatibility we must be able to fall back to 
> the current mode with the help of some boolean flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4385) .NET: LINQ requires IQueryable in joins

2016-12-07 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4385:
--

 Summary: .NET: LINQ requires IQueryable in joins
 Key: IGNITE-4385
 URL: https://issues.apache.org/jira/browse/IGNITE-4385
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 1.7, 1.8
Reporter: Pavel Tupitsyn
Priority: Minor
 Fix For: 1.9


This does not work:
{code}
var q3 = from ic in GetCache().AsCacheQueryable()
from pp in GetCache().AsCacheQueryable()
select pp.Value;
{code}

Error:
{code}
Unexpected query source: GetCache().AsCacheQueryable()
{code}

And this does work:
{code}
var contracts = GetCache().AsCacheQueryable();
var paymentPlans = GetCache().AsCacheQueryable();
var q3 = from ic in contracts
from pp in paymentPlans
select pp.Value;
{code}

While it is usually possible to extract variables, this is an inconvenience. 
See if we can support such expressions (evaluate irrelevant part automatically).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4384) Web console: Invalid Optimized marshaller configuration.

2016-12-07 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-4384:
-

 Summary: Web console: Invalid Optimized marshaller configuration.
 Key: IGNITE-4384
 URL: https://issues.apache.org/jira/browse/IGNITE-4384
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Affects Versions: 1.9
Reporter: Vasiliy Sisko
 Fix For: 1.9


Wrong default value for the 'require serializable' field.
Missing ID mapper field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4058) Divide JVM_OPTS in yardstick properties file

2016-12-07 Thread Oleg Ostanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin resolved IGNITE-4058.
--
Resolution: Fixed

> Divide JVM_OPTS in yardstick properties file
> ---
>
> Key: IGNITE-4058
> URL: https://issues.apache.org/jira/browse/IGNITE-4058
> Project: Ignite
>  Issue Type: Task
>  Components: yardstick
>Affects Versions: 1.7
>Reporter: Sergey Kozlov
>Assignee: Oleg Ostanin
>
> The Yardstick properties file has only the JVM_OPTS parameter, applied to both server 
> and driver nodes. But due to the logic of some load tests and benchmarks we need to 
> pass different parameters for servers and drivers, like heap size (-Xmx, 
> -Xms). I suggest introducing DRIVER_JVM_OPTS and SERVER_JVM_OPTS. JVM_OPTS 
> should be included into both new parameters and become the parameter for 
> general JVM options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4058) Divide JVM_OPTS in yardstick properties file

2016-12-07 Thread Nikolay Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728556#comment-15728556
 ] 

Nikolay Tikhonov commented on IGNITE-4058:
--

[~oleg-ostanin]
Looks good for me.

> Divide JVM_OPTS in yardstick properties file
> ---
>
> Key: IGNITE-4058
> URL: https://issues.apache.org/jira/browse/IGNITE-4058
> Project: Ignite
>  Issue Type: Task
>  Components: yardstick
>Affects Versions: 1.7
>Reporter: Sergey Kozlov
>Assignee: Oleg Ostanin
>
> The Yardstick properties file has only the JVM_OPTS parameter, applied to both server 
> and driver nodes. But due to the logic of some load tests and benchmarks we need to 
> pass different parameters for servers and drivers, like heap size (-Xmx, 
> -Xms). I suggest introducing DRIVER_JVM_OPTS and SERVER_JVM_OPTS. JVM_OPTS 
> should be included into both new parameters and become the parameter for 
> general JVM options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4379) SQL: Local SQL field query broken in master

2016-12-07 Thread Andrew Mashenkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728494#comment-15728494
 ] 

Andrew Mashenkov commented on IGNITE-4379:
--

Fixed.
Pending TC test results.

> SQL: Local SQL field query broken in master
> ---
>
> Key: IGNITE-4379
> URL: https://issues.apache.org/jira/browse/IGNITE-4379
> Project: Ignite
>  Issue Type: Bug
>  Components: SQL
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
> Fix For: 2.0
>
>
> A local SQL fields query returns entries from backup partitions.
> Currently the query context is cleared in the 
> IgniteH2Indexing.queryLocalSqlFields() method before the query is actually executed.
> It seems GridQueryFieldsResultAdapter should restore the query context before 
> the query runs in its iterator() method, or query execution should be moved 
> back outside of GridQueryFieldsResultAdapter.
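
The save/restore pattern suggested above can be sketched as follows. This is a minimal, self-contained Java illustration: QueryContext, FieldsResult and their methods are hypothetical names standing in for Ignite's internal query-context classes, and it assumes the query is actually executed inside iterator().

{code}
import java.util.Iterator;
import java.util.List;

/** Hypothetical thread-local query context (stand-in for the real internal class). */
final class QueryContext {
    private static final ThreadLocal<QueryContext> CTX = new ThreadLocal<>();

    static void set(QueryContext ctx) { CTX.set(ctx); }
    static QueryContext get() { return CTX.get(); }
    static void clear() { CTX.remove(); }
}

/** Hypothetical result adapter that captures the context and re-installs it lazily. */
final class FieldsResult {
    private final QueryContext capturedCtx; // Captured before the caller clears it.
    private final Iterable<List<?>> rows;   // Produces rows when iterated.

    FieldsResult(QueryContext capturedCtx, Iterable<List<?>> rows) {
        this.capturedCtx = capturedCtx;
        this.rows = rows;
    }

    Iterator<List<?>> iterator() {
        // Re-install the captured context on the thread that runs the query.
        QueryContext.set(capturedCtx);

        try {
            // Assumption: the query executes eagerly inside this call.
            return rows.iterator();
        }
        finally {
            QueryContext.clear();
        }
    }
}
{code}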



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4379) SQL: Local SQL field query broken in master

2016-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728467#comment-15728467
 ] 

ASF GitHub Bot commented on IGNITE-4379:


GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/1323

IGNITE-4379: SQL: Local SQL field query broken in master

Fixed broken local SQLFieldsQuery.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4379

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1323.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1323


commit 312a64be2bcbf44e50e95595c3207a6c130dc486
Author: Andrey V. Mashenkov 
Date:   2016-12-07T10:50:37Z

Fixed broken local SQLFieldsQuery.




> SQL: Local SQL field query broken in master
> ---
>
> Key: IGNITE-4379
> URL: https://issues.apache.org/jira/browse/IGNITE-4379
> Project: Ignite
>  Issue Type: Bug
>  Components: SQL
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
> Fix For: 2.0
>
>
> A local SQL fields query returns entries from backup partitions.
> Currently the query context is cleared in the 
> IgniteH2Indexing.queryLocalSqlFields() method before the query is actually executed.
> It seems GridQueryFieldsResultAdapter should restore the query context before 
> the query runs in its iterator() method, or query execution should be moved 
> back outside of GridQueryFieldsResultAdapter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4058) Divide JVM_OPTS in yardstick properties file

2016-12-07 Thread Oleg Ostanin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728453#comment-15728453
 ] 

Oleg Ostanin commented on IGNITE-4058:
--

Fixed the readme file.

> Divide JVM_OPTS in yardstick properties file
> ---
>
> Key: IGNITE-4058
> URL: https://issues.apache.org/jira/browse/IGNITE-4058
> Project: Ignite
>  Issue Type: Task
>  Components: yardstick
>Affects Versions: 1.7
>Reporter: Sergey Kozlov
>Assignee: Oleg Ostanin
>
> The yardstick properties file has only the JVM_OPTS parameter, applied to both 
> server and driver nodes. But due to the logic of some load tests and benchmarks 
> we need to pass different parameters to servers and drivers, like heap size 
> (-Xmx, -Xms). I suggest introducing DRIVER_JVM_OPTS and SERVER_JVM_OPTS. 
> JVM_OPTS should be included into both new parameters and become the parameter 
> for general JVM options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4382) .NET: Documentation on calling Java services

2016-12-07 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-4382.

Resolution: Done
  Assignee: (was: Pavel Tupitsyn)

> .NET: Documentation on calling Java services
> 
>
> Key: IGNITE-4382
> URL: https://issues.apache.org/jira/browse/IGNITE-4382
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 1.8
>
>
> Calling Java services from .NET was implemented in IGNITE-2686, but not 
> documented anywhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4382) .NET: Documentation on calling Java services

2016-12-07 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728347#comment-15728347
 ] 

Pavel Tupitsyn commented on IGNITE-4382:


Done: https://apacheignite-net.readme.io/docs/calling-java-services

> .NET: Documentation on calling Java services
> 
>
> Key: IGNITE-4382
> URL: https://issues.apache.org/jira/browse/IGNITE-4382
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 1.8
>
>
> Calling Java services from .NET was implemented in IGNITE-2686, but not 
> documented anywhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4383) Update section on binary marshaller in Apache Ignite Readme.io docs

2016-12-07 Thread Alexander Paschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Paschenko updated IGNITE-4383:

Description: The section about default behavior changes should be updated, and 
a section about identity resolvers has to be added.

> Update section on binary marshaller in Apache Ignite Readme.io docs
> ---
>
> Key: IGNITE-4383
> URL: https://issues.apache.org/jira/browse/IGNITE-4383
> Project: Ignite
>  Issue Type: Task
>  Components: binary, documentation
>Reporter: Alexander Paschenko
>Assignee: Alexander Paschenko
> Fix For: 1.8
>
>
> The section about default behavior changes should be updated, and a section 
> about identity resolvers has to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4383) Update section on binary marshaller in Apache Ignite Readme.io docs

2016-12-07 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-4383:
---

 Summary: Update section on binary marshaller in Apache Ignite 
Readme.io docs
 Key: IGNITE-4383
 URL: https://issues.apache.org/jira/browse/IGNITE-4383
 Project: Ignite
  Issue Type: Task
  Components: binary, documentation
Reporter: Alexander Paschenko
Assignee: Alexander Paschenko
 Fix For: 1.8






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3959) SQL: Optimize Date\Time fields conversion.

2016-12-07 Thread Sergi Vladykin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728235#comment-15728235
 ] 

Sergi Vladykin commented on IGNITE-3959:


Maybe this must be fixed in the H2 database, no?

> SQL: Optimize Date\Time fields conversion.
> --
>
> Key: IGNITE-3959
> URL: https://issues.apache.org/jira/browse/IGNITE-3959
> Project: Ignite
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 1.6
>Reporter: Andrew Mashenkov
>  Labels: newbie, performance
> Fix For: 2.0
>
>
> SqlFieldsQueries slow down on date\time field processing due to ineffective 
> java.util.Calendar usage for date manipulation by the H2 database.
> A good point to start is the IgniteH2Indexing.wrap() method. Make the optimization 
> for the DATE and TIME types as is already done for the TIMESTAMP type.
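
For reference, decomposing a DATE value without java.util.Calendar can be done with plain arithmetic plus java.time, roughly as in the hypothetical sketch below. The names wrapDate() and DateParts are illustrative only and not the actual IgniteH2Indexing.wrap() or H2 API; time-zone handling is ignored for brevity.

{code}
import java.time.LocalDate;

/** Illustrative container for the decomposed date. */
final class DateParts {
    final int year, month, day;

    DateParts(int year, int month, int day) {
        this.year = year;
        this.month = month;
        this.day = day;
    }
}

final class DateWrapSketch {
    /** Converts epoch millis to year/month/day without creating a Calendar. */
    static DateParts wrapDate(long epochMillis) {
        long epochDay = Math.floorDiv(epochMillis, 86_400_000L);
        LocalDate d = LocalDate.ofEpochDay(epochDay);

        return new DateParts(d.getYear(), d.getMonthValue(), d.getDayOfMonth());
    }

    public static void main(String[] args) {
        DateParts p = wrapDate(System.currentTimeMillis());
        System.out.printf("%04d-%02d-%02d%n", p.year, p.month, p.day);
    }
}
{code}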



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4382) .NET: Documentation on calling Java services

2016-12-07 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4382:
--

 Summary: .NET: Documentation on calling Java services
 Key: IGNITE-4382
 URL: https://issues.apache.org/jira/browse/IGNITE-4382
 Project: Ignite
  Issue Type: Task
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 1.8


Calling Java services from .NET was implemented in IGNITE-2686, but not 
documented anywhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728199#comment-15728199
 ] 

Anton Vinogradov commented on IGNITE-4140:
--

Anil, 

Thanks for the contribution!
Changes merged to the master branch.

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> The current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores the tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer extends StreamAdapter, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method, providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.
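
To illustrate the extractor-based design, a multiple-tuple extractor might look like the sketch below. It assumes the raw type handed to the extractor is Kafka's old high-level consumer type kafka.message.MessageAndMetadata<byte[], byte[]> and that each message payload is a comma-separated list of key=value pairs; the class name and the payload format are purely hypothetical.

{code}
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import kafka.message.MessageAndMetadata;
import org.apache.ignite.stream.StreamMultipleTupleExtractor;

/** Hypothetical extractor: one Kafka message may yield several cache entries. */
public class KeyValueLineExtractor
    implements StreamMultipleTupleExtractor<MessageAndMetadata<byte[], byte[]>, String, String> {
    @Override public Map<String, String> extract(MessageAndMetadata<byte[], byte[]> msg) {
        Map<String, String> entries = new HashMap<>();

        String line = new String(msg.message(), StandardCharsets.UTF_8);

        // Split the payload into individual key=value pairs.
        for (String pair : line.split(",")) {
            String[] kv = pair.split("=", 2);

            if (kv.length == 2)
                entries.put(kv[0].trim(), kv[1].trim());
        }

        return entries;
    }
}
{code}

Presumably such an extractor would be registered on the streamer via the StreamAdapter tuple-extractor setter and invoked from addMessage(...) for every consumed message.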



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-4140) KafkaStreamer should use tuple extractor instead of decoders

2016-12-07 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov resolved IGNITE-4140.
--
   Resolution: Fixed
Fix Version/s: (was: 2.0)
   1.9

> KafkaStreamer should use tuple extractor instead of decoders
> 
>
> Key: IGNITE-4140
> URL: https://issues.apache.org/jira/browse/IGNITE-4140
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Affects Versions: 1.7
>Reporter: Valentin Kulichenko
>Assignee: Anil
>  Labels: patch-available
> Fix For: 1.9
>
>
> The current design of {{KafkaStreamer}} looks incorrect to me. In particular, it 
> extends {{StreamAdapter}}, but ignores the tuple extractors provided there and 
> uses native Kafka decoders instead. This, for example, makes it impossible to 
> produce several entries from one message, as can be done via 
> {{StreamMultipleTupleExtractor}} in other streamers.
> To fix this, we should:
> # Declare the {{KafkaStreamer}} like this:
> {code}
> KafkaStreamer extends StreamAdapter, 
> K, V>
> {code}
> # Remove {{keyDecoder}} and {{valDecoder}} in favor of tuple extractors.
> # Instead of doing {{getStreamer().addData(...)}} directly, call the 
> {{addMessage(...)}} method, providing the raw message consumed from Kafka 
> ({{MessageAndMetadata}}). This method will make sure that the 
> configured extractor is invoked and that all entries are added to 
> {{IgniteDataStreamer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4381) Need to ensure that internal threads do not execute blocking operations

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-4381:
-
Description: 
If internal threads execute blocking operations, this can cause starvation and 
hangs (an example of such an issue: https://issues.apache.org/jira/browse/IGNITE-4371).
Ideally we need a way to 'automatically' find all such places in the code. A 
straightforward idea is to add an assert in GridFutureAdapter.get - the assert 
should fail if it is called by a system thread while the future is not finished. 
At least one issue here is that currently system threads can be blocked on 
operations on the utility/marshaller cache, so the assert should take that into account.

Another idea is to execute tests with the number of threads in all pools set to 1; 
this should also reveal issues with blocking calls.

  was:
If internal threads execute blocking operations, this can cause starvation and 
hangs (an example of such an issue: https://issues.apache.org/jira/browse/IGNITE-4371).
Ideally we need a way to 'automatically' find all such places in the code. A 
straightforward idea is to add an assert in GridFutureAdapter.get - the assert 
should fail if it is called by a system thread while the future is not finished. 
At least one issue here is that currently system threads can be blocked on 
operations on the utility/marshaller cache, so the assert should take that into account.


> Need to ensure that internal threads do not execute blocking operations
> 
>
> Key: IGNITE-4381
> URL: https://issues.apache.org/jira/browse/IGNITE-4381
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Semen Boikov
>Assignee: Konstantin Dudkov
> Fix For: 2.0
>
>
> If internal threads execute blocking operations, this can cause starvation and 
> hangs (an example of such an issue: https://issues.apache.org/jira/browse/IGNITE-4371).
> Ideally we need a way to 'automatically' find all such places in the code. A 
> straightforward idea is to add an assert in GridFutureAdapter.get - the assert 
> should fail if it is called by a system thread while the future is not finished. 
> At least one issue here is that currently system threads can be blocked on 
> operations on the utility/marshaller cache, so the assert should take that into account.
> Another idea is to execute tests with the number of threads in all pools set to 1; 
> this should also reveal issues with blocking calls.
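
A minimal sketch of the proposed guard is shown below, assuming system threads can be recognized by a name prefix and that the few legitimate blocking waits (utility/marshaller cache) are flagged explicitly. The "sys-" prefix, the SKIP_BLOCKING_CHECK flag and the wiring into a future's get() are assumptions, not Ignite's actual internals.

{code}
/** Illustrative guard for blocking get() calls from system threads. */
final class BlockingGetGuard {
    /** Set to true around the few waits that are known to be legitimate. */
    static final ThreadLocal<Boolean> SKIP_BLOCKING_CHECK =
        ThreadLocal.withInitial(() -> false);

    /** Called from a future's get() before it parks; futureDone = future.isDone(). */
    static void assertNotBlockingSystemThread(boolean futureDone) {
        assert futureDone
            || SKIP_BLOCKING_CHECK.get()
            || !Thread.currentThread().getName().startsWith("sys-")
            : "Blocking get() from a system thread: " + Thread.currentThread().getName();
    }
}
{code}

With assertions enabled (-ea) in test runs, such a check would fail fast at the offending call site instead of letting the grid hang.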



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4381) Need to ensure that internal threads do not execute blocking operations

2016-12-07 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4381:


 Summary: Need to ensure that internal threads do not execute blocking 
operations
 Key: IGNITE-4381
 URL: https://issues.apache.org/jira/browse/IGNITE-4381
 Project: Ignite
  Issue Type: Task
  Components: general
Reporter: Semen Boikov
Assignee: Konstantin Dudkov
 Fix For: 2.0


If internal threads execute blocking operations, this can cause starvation and 
hangs (an example of such an issue: https://issues.apache.org/jira/browse/IGNITE-4371).
Ideally we need a way to 'automatically' find all such places in the code. A 
straightforward idea is to add an assert in GridFutureAdapter.get - the assert 
should fail if it is called by a system thread while the future is not finished. 
At least one issue here is that currently system threads can be blocked on 
operations on the utility/marshaller cache, so the assert should take that into account.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-4380) Cache invoke calls can be lost

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-4380:
-
Description: 
Recently added test GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded 
fails on TC in various configurations with transactional cache.

Example of failure 
GridCacheReplicatedOffHeapTieredMultiNodeFullApiSelfTest.testInvokeAllMultithreaded:
{noformat}
junit.framework.AssertionFailedError: expected:<2> but was:<10868>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:234)
at junit.framework.Assert.assertEquals(Assert.java:241)
at junit.framework.TestCase.assertEquals(TestCase.java:409)
at 
org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded(GridCacheAbstractFullApiSelfTest.java:342)
at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1803)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1718)
at java.lang.Thread.run(Thread.java:745)
{noformat}

  was:Recently added test 
GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded fails on TC in 
various configurations with transactional cache.


> Cache invoke calls can be lost
> --
>
> Key: IGNITE-4380
> URL: https://issues.apache.org/jira/browse/IGNITE-4380
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Semen Boikov
>Priority: Critical
> Fix For: 2.0
>
>
> Recently added test 
> GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded fails on TC in 
> various configurations with transactional cache.
> Example of failure 
> GridCacheReplicatedOffHeapTieredMultiNodeFullApiSelfTest.testInvokeAllMultithreaded:
> {noformat}
> junit.framework.AssertionFailedError: expected:<2> but was:<10868>
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.Assert.failNotEquals(Assert.java:329)
> at junit.framework.Assert.assertEquals(Assert.java:78)
> at junit.framework.Assert.assertEquals(Assert.java:234)
> at junit.framework.Assert.assertEquals(Assert.java:241)
> at junit.framework.TestCase.assertEquals(TestCase.java:409)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded(GridCacheAbstractFullApiSelfTest.java:342)
> at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1803)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1718)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4380) Cache invoke calls can be lost

2016-12-07 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4380:


 Summary: Cache invoke calls can be lost
 Key: IGNITE-4380
 URL: https://issues.apache.org/jira/browse/IGNITE-4380
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Semen Boikov
Assignee: Semen Boikov
Priority: Critical
 Fix For: 2.0


Recently added test GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded 
fails on TC in various configurations with transactional cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-910) Assert with near-only tx cache: Cannot have more then one ready near-local candidate

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov resolved IGNITE-910.
-
Resolution: Fixed
  Assignee: (was: Alexey Goncharuk)

Test IgniteCacheNearOnlyTxTest does not fail now; closing this issue as fixed.

> Assert with near-only tx cache: Cannot have more then one ready near-local 
> candidate
> 
>
> Key: IGNITE-910
> URL: https://issues.apache.org/jira/browse/IGNITE-910
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Priority: Critical
>
> Added test IgniteCacheNearOnlyTxTest.testNearOnlyPutMultithreaded. The test passes, 
> but there are many errors in the log:
> {noformat}
> java.lang.AssertionError: Cannot have more then one ready near-local 
> candidate [c=GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254111, 
> order=1431748248330], timeout=0, ts=1431748254106, threadId=181, id=282, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254113, order=1431748248332], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], cand=GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248323], timeout=0, ts=1431748254106, threadId=182, id=279, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254112, order=1431748248328], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], mvcc=GridCacheMvcc 
> [locs=[GridCacheMvccCandidate [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, 
> ver=GridCacheVersion [topVer=43228250, nodeOrderDrId=2, 
> globalTime=1431748254111, order=1431748248331], timeout=0, ts=1431748254106, 
> threadId=184, id=283, topVer=AffinityTopologyVersion [topVer=-1, 
> minorTopVer=0], reentry=null, otherNodeId=null, otherVer=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=1, globalTime=1431748254113, 
> order=1431748248333], mappedDhtNodes=null, mappedNearNodes=null, 
> ownerVer=null, key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254111, 
> order=1431748248330], timeout=0, ts=1431748254106, threadId=181, id=282, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254113, order=1431748248332], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248323], timeout=0, ts=1431748254106, threadId=182, id=279, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254112, order=1431748248328], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248325], timeout=0, ts=1431748254106, threadId=180, id=280, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=null, mappedDhtNodes=null, mappedNearNodes=null, 
> ownerVer=null, key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> 

[jira] [Closed] (IGNITE-910) Assert with near-only tx cache: Cannot have more then one ready near-local candidate

2016-12-07 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov closed IGNITE-910.
---

> Assert with near-only tx cache: Cannot have more then one ready near-local 
> candidate
> 
>
> Key: IGNITE-910
> URL: https://issues.apache.org/jira/browse/IGNITE-910
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Priority: Critical
>
> Added test IgniteCacheNearOnlyTxTest.testNearOnlyPutMultithreaded. The test passes, 
> but there are many errors in the log:
> {noformat}
> java.lang.AssertionError: Cannot have more then one ready near-local 
> candidate [c=GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254111, 
> order=1431748248330], timeout=0, ts=1431748254106, threadId=181, id=282, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254113, order=1431748248332], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], cand=GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248323], timeout=0, ts=1431748254106, threadId=182, id=279, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254112, order=1431748248328], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], mvcc=GridCacheMvcc 
> [locs=[GridCacheMvccCandidate [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, 
> ver=GridCacheVersion [topVer=43228250, nodeOrderDrId=2, 
> globalTime=1431748254111, order=1431748248331], timeout=0, ts=1431748254106, 
> threadId=184, id=283, topVer=AffinityTopologyVersion [topVer=-1, 
> minorTopVer=0], reentry=null, otherNodeId=null, otherVer=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=1, globalTime=1431748254113, 
> order=1431748248333], mappedDhtNodes=null, mappedNearNodes=null, 
> ownerVer=null, key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254111, 
> order=1431748248330], timeout=0, ts=1431748254106, threadId=181, id=282, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254113, order=1431748248332], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248323], timeout=0, ts=1431748254106, threadId=182, id=279, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=GridCacheVersion [topVer=43228250, 
> nodeOrderDrId=1, globalTime=1431748254112, order=1431748248328], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, 
> key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
> [nodeId=1009df79-3666-4aa3-8136-0b9afbb16001, ver=GridCacheVersion 
> [topVer=43228250, nodeOrderDrId=2, globalTime=1431748254110, 
> order=1431748248325], timeout=0, ts=1431748254106, threadId=180, id=280, 
> topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], reentry=null, 
> otherNodeId=null, otherVer=null, mappedDhtNodes=null, mappedNearNodes=null, 
> ownerVer=null, key=KeyCacheObjectImpl [val=1, hasValBytes=true], 
> masks=local=1|owner=0|ready=0|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=1|removed=0,
>  prevVer=null, nextVer=null], GridCacheMvccCandidate 
>