[jira] [Created] (IGNITE-3233) need to optimize injections

2016-06-01 Thread Yakov Zhdanov (JIRA)
Yakov Zhdanov created IGNITE-3233:
-

 Summary: need to optimize injections
 Key: IGNITE-3233
 URL: https://issues.apache.org/jira/browse/IGNITE-3233
 Project: Ignite
  Issue Type: Improvement
Reporter: Yakov Zhdanov
Assignee: Semen Boikov
Priority: Blocker
 Fix For: 1.7


{noformat}
I want to check how this closure performs vs. an empty one:

ignite.compute().affinityCall(xx, xx, new IgniteCallable<Object>() {
    @IgniteInstanceResource
    Ignite ignite;

    @SpringApplicationContextResource
    ApplicationContext ctx;

    Object bean1;
    Object bean2;

    @Override public Object call() throws Exception {
        bean1 = ctx.getBean(bean1class);
        bean2 = ctx.getBean(bean2class);

        return null;
    }
});
{noformat}

The closure above is 3 times slower than a no-op closure, so the injections should be 
optimized.

I see the following options:
# Annotations
## Introduce a SpringAware annotation and annotate each object that needs 
injection, including SPI and internal classes.
## Support Spring's {{@Autowired}} annotation.
## I am not sure about the implementation approach: we can use Spring's 
{{AutowireCapableBeanFactory.autowireBean()}} or generate and compile code that 
performs the injections.
# Interfaces
## IgniteAware
## Spring ApplicationContext aware
## ...

The implementor should propose a solution and back it with microbenchmarks.
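A minimal sketch of the kind of optimization implied here: cache the reflective injection metadata per class, so the field scan happens once per class instead of once per closure execution. The annotation and class names below are hypothetical stand-ins, not Ignite's actual resource-processor internals.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InjectionCache {
    /** Hypothetical stand-in for @IgniteInstanceResource and friends. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Inject {}

    /** Injectable fields per class: the reflective scan runs once per class. */
    private static final Map<Class<?>, List<Field>> FIELDS = new ConcurrentHashMap<>();

    public static void inject(Object target, Object resource) {
        List<Field> fields = FIELDS.computeIfAbsent(target.getClass(), cls -> {
            List<Field> found = new ArrayList<>();
            for (Class<?> c = cls; c != null; c = c.getSuperclass())
                for (Field f : c.getDeclaredFields())
                    if (f.isAnnotationPresent(Inject.class)) {
                        f.setAccessible(true);
                        found.add(f);
                    }
            return found;
        });

        try {
            for (Field f : fields)
                f.set(target, resource); // cheap: no scanning on the hot path
        }
        catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Example closure with one injected field. */
    public static class Closure {
        @Inject public Object ignite;
    }
}
```

Whether this particular caching scheme closes the 3x gap is exactly what the requested microbenchmarks would have to show.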




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3231) GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must either be declared abstract or implement method 'gridCount()'

2016-06-01 Thread Dmitriy Setrakyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-3231:
--
Assignee: Anton Vinogradov

> GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
> either be  declared  abstract or implement  method  'gridCount()'
> ---
>
> Key: IGNITE-3231
> URL: https://issues.apache.org/jira/browse/IGNITE-3231
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6, 1.7
>Reporter: Biao Ma
>Assignee: Anton Vinogradov
> Fix For: 1.7
>
>
> It seems that the implementation of the method `instanceCout()` is missing 
> here, so just adding it should work.





[jira] [Updated] (IGNITE-3231) GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must either be declared abstract or implement method 'gridCount()'

2016-06-01 Thread Dmitriy Setrakyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-3231:
--
Fix Version/s: (was: 1.6)
   1.7

> GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
> either be  declared  abstract or implement  method  'gridCount()'
> ---
>
> Key: IGNITE-3231
> URL: https://issues.apache.org/jira/browse/IGNITE-3231
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6, 1.7
>Reporter: Biao Ma
> Fix For: 1.7
>
>
> It seems that the implementation of the method `instanceCout()` is missing 
> here, so just adding it should work.





[jira] [Updated] (IGNITE-3231) GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must either be declared abstract or implement method 'gridCount()'

2016-06-01 Thread Biao Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biao Ma updated IGNITE-3231:

Summary: 
GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
either be  declared  abstract or implement  method  'gridCount()'  (was: 
GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
either be  declared  abstract or implement  method  'instanceCount()')

> GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
> either be  declared  abstract or implement  method  'gridCount()'
> ---
>
> Key: IGNITE-3231
> URL: https://issues.apache.org/jira/browse/IGNITE-3231
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6, 1.7
>Reporter: Biao Ma
> Fix For: 1.6
>
>
> It seems that the implementation of the method `instanceCout()` is missing 
> here, so just adding it should work.





[jira] [Commented] (IGNITE-3231) GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must either be declared abstract or implement method 'instanceCount()'

2016-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311716#comment-15311716
 ] 

ASF GitHub Bot commented on IGNITE-3231:


GitHub user f7753 opened a pull request:

https://github.com/apache/ignite/pull/771

IGNITE-3231 add the unimplement method

`GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest` must 
either be declared abstract or implement method `gridCount()`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/f7753/ignite add_method_gridCount

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #771


commit 95de4cf16587da76762576f650a82ad957c17f7b
Author: MaBiao 
Date:   2016-06-02T04:31:52Z

add the unimplement method




> GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
> either be  declared  abstract or implement  method  'instanceCount()'
> ---
>
> Key: IGNITE-3231
> URL: https://issues.apache.org/jira/browse/IGNITE-3231
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6, 1.7
>Reporter: Biao Ma
> Fix For: 1.6
>
>
> It seems that the implementation of the method `instanceCout()` is missing 
> here, so just adding it should work.





[jira] [Created] (IGNITE-3231) GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must either be declared abstract or implement method 'instanceCount()'

2016-06-01 Thread Biao Ma (JIRA)
Biao Ma created IGNITE-3231:
---

 Summary: 
GridCachePartitionedNearDisabledFairAffinityMultiNodeFullApiSelfTest must 
either be  declared  abstract or implement  method  'instanceCount()'
 Key: IGNITE-3231
 URL: https://issues.apache.org/jira/browse/IGNITE-3231
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.6, 1.7
Reporter: Biao Ma
 Fix For: 1.6


It seems that the implementation of the method `instanceCout()` is missing 
here, so just adding it should work.





[jira] [Commented] (IGNITE-2879) ODBC: Add support for Decimal type (SQL_NUMERIC_STRUCT).

2016-06-01 Thread Igor Sapego (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311133#comment-15311133
 ] 

Igor Sapego commented on IGNITE-2879:
-

In the end, I decided to implement my own solution, which is not based on the Java 
classes but is compatible with them. It consists of two classes: 
{{BigInteger}} and {{Decimal}}. The {{BigInteger}} class implements most of the 
functionality, including multiplication, division, pow and so on, while the 
{{Decimal}} class is implemented as a {{BigInteger}} with a scale and only deals 
with operations that can affect the scale of the decimal number. Only the minimal 
set of methods needed to adjust the scale of decimals, as required by ODBC, has 
been implemented. This set consists of {{SetScale()}}, {{GetScale()}}, 
{{GetPrecision()}} and the input/output operators of the {{Decimal}} class; all 
other methods were implemented only to support this functionality.
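As a rough illustration of the design described above (the actual implementation is C++), a decimal can be modelled as an arbitrary-precision unscaled integer plus a scale, with the scale-setter rescaling the unscaled value by a power of ten. This Java sketch uses the stdlib {{java.math.BigInteger}}; the class and method names are hypothetical, not the driver's API.

```java
import java.math.BigInteger;

/** A decimal modelled as unscaled * 10^(-scale); names are hypothetical. */
public final class SimpleDecimal {
    private final BigInteger unscaled;
    private final int scale;

    public SimpleDecimal(BigInteger unscaled, int scale) {
        this.unscaled = unscaled;
        this.scale = scale;
    }

    public BigInteger getUnscaled() { return unscaled; }

    public int getScale() { return scale; }

    /** Precision = number of decimal digits in the unscaled value. */
    public int getPrecision() { return unscaled.abs().toString().length(); }

    /**
     * Rescale by multiplying or dividing the unscaled value by a power of
     * ten; shrinking the scale truncates low-order digits.
     */
    public SimpleDecimal setScale(int newScale) {
        if (newScale == scale)
            return this;

        BigInteger pow = BigInteger.TEN.pow(Math.abs(newScale - scale));
        BigInteger v = newScale > scale ? unscaled.multiply(pow) : unscaled.divide(pow);

        return new SimpleDecimal(v, newScale);
    }
}
```

For example, the value 12.345 is (unscaled=12345, scale=3); rescaling it to scale 5 yields unscaled 1234500, and to scale 1 yields unscaled 123.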

> ODBC: Add support for Decimal type (SQL_NUMERIC_STRUCT).
> 
>
> Key: IGNITE-2879
> URL: https://issues.apache.org/jira/browse/IGNITE-2879
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Affects Versions: 1.5.0.final
>Reporter: Igor Sapego
>Assignee: Igor Sapego
> Fix For: 1.7
>
>
> We need reasonable Decimal type support at least for the ODBC.





[jira] [Commented] (IGNITE-2680) Terminating running SQL queries

2016-06-01 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310969#comment-15310969
 ] 

Alexei Scherbakov commented on IGNITE-2680:
---

Added 3 tests for the first use case:
1) Query is cancelled during the execution phase (the result set is not ready yet).
2) Query is cancelled after completion.
3) Query is cancelled while fetching pages to the initiator node.

Working branch: https://github.com/ascherbakoff/ignite/tree/ignite-2680
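The scenarios above rely on cooperative cancellation: closing the cursor sets a flag that the execution loop checks at safe points (e.g. between result pages). A minimal sketch, with hypothetical names rather than the actual Ignite implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Sketch of cooperative query cancellation: close() flips a flag that the
 * execution loop checks between pages, so a cancelled query stops promptly
 * instead of running to completion.
 */
public class CancellableQuery {
    private final AtomicBoolean cancelled = new AtomicBoolean();

    /** What QueryCursor.close() would do in this scheme. */
    public void close() {
        cancelled.set(true);
    }

    /** Returns the pages "fetched" before cancellation took effect. */
    public List<Integer> run(int totalPages, int closeAfterPage) {
        List<Integer> pages = new ArrayList<>();

        for (int page = 0; page < totalPages; page++) {
            if (cancelled.get())
                break;              // cancellation checked between pages

            pages.add(page);        // fetch one page

            if (page == closeAfterPage)
                close();            // simulate the initiator closing the cursor
        }

        return pages;
    }
}
```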

> Terminating running SQL queries
> ---
>
> Key: IGNITE-2680
> URL: https://issues.apache.org/jira/browse/IGNITE-2680
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Alexei Scherbakov
>  Labels: important
>
> If a long-running SQL query is started over a huge cache with millions of 
> entries, there should be a way to terminate it. Even if {{QueryCursor}} is 
> closed, the query won't be cancelled and will keep consuming available 
> resources.
> There should be a way to close a query using an object that is related to it. 
> Ideally, we can use the {{QueryCursor.close()}} method for that.





[jira] [Commented] (IGNITE-3175) BigDecimal fields are not supported if query is executed from IgniteRDD

2016-06-01 Thread Valentin Kulichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310895#comment-15310895
 ] 

Valentin Kulichenko commented on IGNITE-3175:
-

I'm reopening the ticket because it looks like {{BigDecimal}} is not the only 
unsupported type. The same exception occurs at least for {{java.sql.Date}}.

Taras, since you were fixing this, can you please take a look and check which 
other types can be problematic?

> BigDecimal fields are not supported if query is executed from IgniteRDD
> ---
>
> Key: IGNITE-3175
> URL: https://issues.apache.org/jira/browse/IGNITE-3175
> Project: Ignite
>  Issue Type: Bug
>  Components: Ignite RDD
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
> Fix For: 1.7
>
>
> If one of the fields participating in the query is {{BigDecimal}}, the query 
> will fail when executed from {{IgniteRDD}} with the following error:
> {noformat}
> scala.MatchError: 1124757 (of class java.math.BigDecimal)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:505)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Most likely this is caused by the fact that {{IgniteRDD.dataType()}} method 
> doesn't honor {{BigDecimal}} and returns {{StructType}} by default. We should 
> fix this and check other possible types as well.
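The suspected root cause, a type-mapping method that falls back to a generic struct type for classes it doesn't recognize, can be sketched as follows. The names and type strings are illustrative stand-ins, not the actual IgniteRDD or Spark Catalyst API.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a Java-class-to-SQL-type mapping with an explicit entry for
 * BigDecimal; unknown classes fall back to a generic struct default, which
 * is the kind of gap that reportedly caused the MatchError above.
 */
public class TypeMapping {
    private static final Map<Class<?>, String> TYPES = new HashMap<>();

    static {
        TYPES.put(Integer.class, "IntegerType");
        TYPES.put(String.class, "StringType");
        TYPES.put(BigDecimal.class, "DecimalType");  // the missing entry
        TYPES.put(java.sql.Date.class, "DateType");  // also reported failing
    }

    public static String dataType(Class<?> cls) {
        return TYPES.getOrDefault(cls, "StructType"); // fallback default
    }
}
```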





[jira] [Created] (IGNITE-3230) External addresses are not registered in IP finder

2016-06-01 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-3230:
---

 Summary: External addresses are not registered in IP finder
 Key: IGNITE-3230
 URL: https://issues.apache.org/jira/browse/IGNITE-3230
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 1.6
Reporter: Valentin Kulichenko
 Fix For: 1.7


If a user configures an {{AddressResolver}}, the addresses returned by it are not 
registered in shared IP finders, so the only way to configure discovery in this 
case is to use the static IP finder.





[jira] [Commented] (IGNITE-3207) Rename IgniteConfiguration.gridName

2016-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310500#comment-15310500
 ] 

ASF GitHub Bot commented on IGNITE-3207:


Github user f7753 closed the pull request at:

https://github.com/apache/ignite/pull/761


> Rename IgniteConfiguration.gridName
> ---
>
> Key: IGNITE-3207
> URL: https://issues.apache.org/jira/browse/IGNITE-3207
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.1.4
>Reporter: Pavel Tupitsyn
>Assignee: Biao Ma
> Fix For: 1.7
>
>
> We have got a TON of questions about the gridName property. Everyone thinks 
> that clusters are formed based on the gridName, that is, nodes with the same 
> grid name will join one cluster, and nodes with a different name will be in a 
> separate cluster.
> Let's do the following:
> * Deprecate IgniteConfiguration.gridName
> * Add IgniteConfiguration.localInstanceName
> * Rename related parameters in Ignition class (and other places, if present)
> * Update Javadoc: clearly state that this name only works locally and has no 
> effect on topology.





[jira] [Commented] (IGNITE-3229) un existing link in Class GridCacheStoreValueBytesSelfTest

2016-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310473#comment-15310473
 ] 

ASF GitHub Bot commented on IGNITE-3229:


GitHub user f7753 opened a pull request:

https://github.com/apache/ignite/pull/770

IGNITE-3229 un existing link in Class GridCacheStoreValueBytesSelfTest

Class {{GridCacheStoreValueBytesSelfTest}}'s Javadoc links to the non-existent 
method 
`org.apache.ignite.configuration.CacheConfiguration#isStoreValueBytes()`. 
Change it to 
`org.apache.ignite.configuration.CacheConfiguration#isStoreKeepBinary()`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/f7753/ignite fix_link

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #770


commit d14fdfcf7a4cc65fb21084b974ee6fdba640a458
Author: MaBiao 
Date:   2016-06-01T15:20:27Z

fix the unexisted link isStoreValueBytes to isStoreKeepBinary




> un existing link in Class GridCacheStoreValueBytesSelfTest
> --
>
> Key: IGNITE-3229
> URL: https://issues.apache.org/jira/browse/IGNITE-3229
> Project: Ignite
>  Issue Type: Bug
>Reporter: Biao Ma
>
> Class {{GridCacheStoreValueBytesSelfTest}}'s Javadoc links to the non-existent 
> method 
> `org.apache.ignite.configuration.CacheConfiguration#isStoreValueBytes()`.





[jira] [Created] (IGNITE-3229) un existing link in Class GridCacheStoreValueBytesSelfTest

2016-06-01 Thread Biao Ma (JIRA)
Biao Ma created IGNITE-3229:
---

 Summary: un existing link in Class GridCacheStoreValueBytesSelfTest
 Key: IGNITE-3229
 URL: https://issues.apache.org/jira/browse/IGNITE-3229
 Project: Ignite
  Issue Type: Bug
Reporter: Biao Ma


Class {{GridCacheStoreValueBytesSelfTest}}'s Javadoc links to the non-existent 
method 
`org.apache.ignite.configuration.CacheConfiguration#isStoreValueBytes()`.





[jira] [Created] (IGNITE-3228) Hadoop: Invalid page allocation logic inside HadoopMultimapBase

2016-06-01 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-3228:
---

 Summary: Hadoop: Invalid page allocation logic inside 
HadoopMultimapBase
 Key: IGNITE-3228
 URL: https://issues.apache.org/jira/browse/IGNITE-3228
 Project: Ignite
  Issue Type: Task
  Components: hadoop
Affects Versions: 1.6
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
Priority: Critical
 Fix For: 1.7


See the {{HadoopMultimapBase$AddressBase.allocateNextPage()}} method: it 
constantly allocates more and more pages but doesn't release the previous 
chunks.

As a result, memory consumption grows constantly.





[jira] [Created] (IGNITE-3227) IgniteCache: add method to calculate size per partition

2016-06-01 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-3227:
---

 Summary: IgniteCache: add method to calculate size per partition
 Key: IGNITE-3227
 URL: https://issues.apache.org/jira/browse/IGNITE-3227
 Project: Ignite
  Issue Type: Improvement
Reporter: Denis Magda


It makes sense to add size calculation per partition. Accordingly, the following 
methods should be added to the {{IgniteCache}} API.

{code}
public int size(int partition, CachePeekMode... peekModes) throws CacheException;

public int localSize(int partition, CachePeekMode... peekModes);
{code}
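A minimal sketch of what such a per-partition size could compute, assuming keys map to one of N partitions by hash. The names and the hash-based mapping are illustrative only; Ignite's actual affinity function is more involved.

```java
import java.util.List;

/** Hypothetical per-partition size calculation over locally held keys. */
public class PartitionSize {
    /** Map a key to one of N partitions by hash (illustrative mapping). */
    public static int partition(Object key, int partitions) {
        return Math.abs(key.hashCode() % partitions);
    }

    /** Count the locally held keys that fall into the given partition. */
    public static int localSize(List<?> keys, int partition, int partitions) {
        int size = 0;

        for (Object key : keys)
            if (partition(key, partitions) == partition)
                size++;

        return size;
    }
}
```

Summing localSize over all partitions recovers the total local size, which is a useful sanity check for any implementation of these methods.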





[jira] [Updated] (IGNITE-3227) IgniteCache: add method to calculate size per partition

2016-06-01 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-3227:

Labels: community important  (was: )

> IgniteCache: add method to calculate size per partition
> ---
>
> Key: IGNITE-3227
> URL: https://issues.apache.org/jira/browse/IGNITE-3227
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Magda
>  Labels: community, important
>
> It makes sense to add size calculation per partition. Actually the following 
> methods should be added to the {{IgniteCache}} API.
> {code}
> public int size(int partition, CachePeekMode... peekModes) throws CacheException;
> public int localSize(int partition, CachePeekMode... peekModes);
> {code}





[jira] [Updated] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Taras Ledkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov updated IGNITE-2310:
-
Due Date: 7/Jun/16

> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity 
> node when a compute job is being sent to the node. The partition has to be 
> locked on the cache while the compute job is being executed. This will make it 
> possible to safely execute queries (Scan or local SQL) over the data that is 
> located locally in the locked partition.
> In addition, the Ignite Compute API has to be extended by adding {{affinityCall}} 
> and {{affinityRun}} methods that accept a list of caches whose partitions have 
> to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) Local SQL query over data located in a concrete partition in multiple 
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key, passing the 
> Organisation and Persons caches' names to the method to be sure that the 
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o 
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete, 
> and the partition over which the query is executed mustn't be moved to 
> another node. Due to affinity collocation, the partition number will be the 
> same for all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while {{affinityCall}} 
> is executed.
> UPD (YZ, May 31)
> # If a closure arrives at a node but the partition is not there, it should be 
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving 
> only one partition, but evicting partitions only after all partitions in all 
> caches (with the same affinity function) on this node are locked for eviction? 
> [~sboikov], can you please comment? It seems this should work faster for 
> closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName, 
> IgniteCallable)}} and the same for Runnable. This will allow me not to mess 
> with an affinity key in case I already know the partition.
> UPD (SB, June 01)
> Yakov, I think it is possible to implement this 'locking for evictions' 
> approach, but personally I prefer partition reservation:
> - the reservation approach is already implemented and works fine in SQL queries;
> - a partition reservation is just a CAS operation; if we need to do ~10 
> reservations, I think this will be negligible compared to the job execution 
> time;
> - currently caches are rebalanced completely independently, and changing this 
> would be a complicated refactoring;
> - I see some difficulties in determining that caches have the same affinity: if 
> the user uses a custom function, should he implement 'equals'? For standard 
> affinity functions the user can set a backup filter; what to do in this case? 
> Should the user implement 'equals' for the filter? Even if the affinity 
> functions are the same, the cache configuration can have a node filter, so the 
> affinity mapping will be different.





[jira] [Updated] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-2310:
-
Description: 
Partition of a key passed to {{affinityRun}} must be located on the affinity 
node when a compute job is being sent to the node. The partition has to be 
locked on the cache while the compute job is being executed. This will make it 
possible to safely execute queries (Scan or local SQL) over the data that is 
located locally in the locked partition.

In addition, the Ignite Compute API has to be extended by adding {{affinityCall}} 
and {{affinityRun}} methods that accept a list of caches whose partitions have to 
be locked while a compute task is being executed.

Test cases to validate the functionality:

1) Local SQL query over data located in a concrete partition in multiple caches:
- create an Organisation cache and a Persons cache;
- collocate Persons by 'organisationID';
- send {{affinityRun}} using 'organisationID' as an affinity key, passing the 
Organisation and Persons caches' names to the method to be sure that the 
partition will be locked on both caches;
- execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o 
WHERE p.orgId=o.id' on a changing topology. The result set must be complete, and 
the partition over which the query is executed mustn't be moved to another node. 
Due to affinity collocation, the partition number will be the same for all 
Persons that belong to a particular 'organisationID'.

2) Scan Query over a particular partition that is locked while {{affinityCall}} 
is executed.

UPD (YZ, May 31)
# If a closure arrives at a node but the partition is not there, it should be 
silently failed over to the current owner.
# I don't think the user should provide a list of caches. How about reserving 
only one partition, but evicting partitions only after all partitions in all 
caches (with the same affinity function) on this node are locked for eviction? 
[~sboikov], can you please comment? It seems this should work faster for 
closures and will hardly affect the rebalancing stuff.
# I would add a method {{affinityCall(int partId, String cacheName, 
IgniteCallable)}} and the same for Runnable. This will allow me not to mess with 
an affinity key in case I already know the partition.

UPD (SB, June 01)
Yakov, I think it is possible to implement this 'locking for evictions' 
approach, but personally I prefer partition reservation:
- the reservation approach is already implemented and works fine in SQL queries;
- a partition reservation is just a CAS operation; if we need to do ~10 
reservations, I think this will be negligible compared to the job execution time;
- currently caches are rebalanced completely independently, and changing this 
would be a complicated refactoring;
- I see some difficulties in determining that caches have the same affinity: if 
the user uses a custom function, should he implement 'equals'? For standard 
affinity functions the user can set a backup filter; what to do in this case? 
Should the user implement 'equals' for the filter? Even if the affinity 
functions are the same, the cache configuration can have a node filter, so the 
affinity mapping will be different.

  was:
Partition of a key passed to {{affinityRun}} must be located on the affinity 
node when a compute job is being sent to the node. The partition has to be 
locked on the cache while the compute job is being executed. This will make it 
possible to safely execute queries (Scan or local SQL) over the data that is 
located locally in the locked partition.

In addition, the Ignite Compute API has to be extended by adding {{affinityCall}} 
and {{affinityRun}} methods that accept a list of caches whose partitions have to 
be locked while a compute task is being executed.

Test cases to validate the functionality:

1) Local SQL query over data located in a concrete partition in multiple caches:
- create an Organisation cache and a Persons cache;
- collocate Persons by 'organisationID';
- send {{affinityRun}} using 'organisationID' as an affinity key, passing the 
Organisation and Persons caches' names to the method to be sure that the 
partition will be locked on both caches;
- execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o 
WHERE p.orgId=o.id' on a changing topology. The result set must be complete, and 
the partition over which the query is executed mustn't be moved to another node. 
Due to affinity collocation, the partition number will be the same for all 
Persons that belong to a particular 'organisationID'.

2) Scan Query over particular partition that is locked when {{affinityCall}} is 
executed.  

UPD (YZ May, 31)
# If closure arrives to node but partition is not there it should be silently 
failed over to current owner.
# I don't think user should provide list of caches. How about reserving only 
one partition, but evict partitions after all partitions in all caches (with 
same affinity function) on this node are locked for eviction. [~sboikov], can 
you please comment? It seems 

[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310351#comment-15310351
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Semen proposes to add the {{partsToLock}} handling logic and the job's retry 
behavior to the core logic ({{GridJobExecuteRequest}} and {{GridJobProcessor}}) 
instead of creating a dedicated job wrapper for affinityRun.
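Semen's earlier point that partition reservation "is just a CAS operation" can be sketched as a reservation counter in which a special value marks an already-evicted partition. The names are illustrative, not Ignite's actual partition API.

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of CAS-based partition reservation: a counter of active
 * reservations, with -1 marking a partition that has been evicted.
 * Eviction and reservation race safely via compare-and-set.
 */
public class PartitionReservation {
    private static final int EVICTED = -1;

    private final AtomicInteger reservations = new AtomicInteger();

    /** Returns false if the partition was already evicted. */
    public boolean reserve() {
        for (;;) {
            int r = reservations.get();

            if (r == EVICTED)
                return false;

            if (reservations.compareAndSet(r, r + 1))
                return true;
        }
    }

    public void release() {
        reservations.decrementAndGet();
    }

    /** Eviction succeeds only when there are no active reservations. */
    public boolean tryEvict() {
        return reservations.compareAndSet(0, EVICTED);
    }
}
```

A job would reserve the partitions it needs before executing and release them afterwards; rebalancing would call tryEvict and retry (or skip) partitions that are still reserved.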

> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity
> node when a compute job is being sent to the node. The partition has to be
> locked on the cache until the compute job completes. This will make it
> possible to safely execute queries (Scan or local SQL) over the data located
> locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}}
> and {{affinityRun}} methods that accept a list of caches whose partitions
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) local SQL query over data located in a concrete partition in multiple
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key and pass the
> Organisation and Persons caches' names to the method to be sure that the
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete;
> the partition over which the query is executed must not be moved to another
> node. Due to affinity collocation the partition number will be the same for
> all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving
> only one partition, but evicting partitions only after all partitions in all
> caches (with the same affinity function) on this node are locked for
> eviction? [~sboikov], can you please comment? It seems this should work
> faster for closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName,
> IgniteCallable)}} and the same for Runnable. This will allow me to avoid
> dealing with an affinity key when I already know the partition.
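The reservation semantics sketched in the UPD above can be illustrated with a minimal, self-contained example (plain Java, no Ignite APIs; every name here is made up for illustration): a job runs only while it holds a reservation on the partition, and eviction is refused while any reservation is held.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class PartitionReservationSketch {
    // Reservation counter per partition; eviction is allowed only at zero.
    static final class Partition {
        final AtomicInteger reservations = new AtomicInteger();
        volatile boolean evicted;

        boolean reserve() {
            while (true) {
                int r = reservations.get();
                if (evicted)
                    return false; // partition already moved away -> fail over
                if (reservations.compareAndSet(r, r + 1))
                    return true;
            }
        }

        void release() {
            reservations.decrementAndGet();
        }

        boolean tryEvict() {
            // Evict only if nobody holds a reservation.
            if (reservations.get() == 0) {
                evicted = true;
                return true;
            }
            return false;
        }
    }

    static <T> T affinityCall(Partition part, Callable<T> job) throws Exception {
        if (!part.reserve())
            throw new IllegalStateException("Partition not local, fail over to current owner");
        try {
            return job.call();
        }
        finally {
            part.release();
        }
    }

    public static void main(String[] args) throws Exception {
        Partition p = new Partition();

        // While the job runs, eviction must be refused.
        String res = affinityCall(p, () -> {
            if (p.tryEvict())
                throw new AssertionError("evicted while reserved");
            return "ok";
        });
        System.out.println(res); // ok

        // After the job completes, eviction succeeds.
        System.out.println(p.tryEvict()); // true
    }
}
```

The sketch only shows the lifecycle (reserve, run, release, evict); the real implementation would also have to coordinate with rebalancing and map the failed-over closure to the current partition owner.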



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2616) NonHeap memory usage metrics don't work as expected.

2016-06-01 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310340#comment-15310340
 ] 

Denis Magda commented on IGNITE-2616:
-

Vlad, I've reviewed your changes and left review notes in the pull-request. 
Please address them.

> NonHeap memory usage metrics don't work as expected.
> 
>
> Key: IGNITE-2616
> URL: https://issues.apache.org/jira/browse/IGNITE-2616
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ershov
>Assignee: Vladislav Pyatkov
>Priority: Minor
> Attachments: ClusterMetricsOnCacheSelfTest.java
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> This simple code:
> {noformat}
> public static void main(String ... args) {
> MemoryMXBean mxBean = ManagementFactory.getMemoryMXBean();
> System.out.println(mxBean.getNonHeapMemoryUsage());
> GridUnsafeMemory uMem = new GridUnsafeMemory(1024L * 1024 * 1024 * 
> 3); //3GB
> System.out.println(mxBean.getNonHeapMemoryUsage());
> uMem.allocate(1024 * 1024 * 1024, true, false);
> System.out.println(mxBean.getNonHeapMemoryUsage());
> uMem.allocate(1024 * 1024 * 1024, true, true);
> System.out.println(mxBean.getNonHeapMemoryUsage());
> }
> {noformat}
> shows: 
> {noformat}
> init = 2555904(2496K) used = 4783352(4671K) committed = 8060928(7872K) max = 
> -1(-1K)
> init = 2555904(2496K) used = 5018704(4901K) committed = 8060928(7872K) max = 
> -1(-1K)
> init = 2555904(2496K) used = 5018960(4901K) committed = 8060928(7872K) max = 
> -1(-1K)
> init = 2555904(2496K) used = 5018960(4901K) committed = 8060928(7872K) max = 
> -1(-1K)
> {noformat}
> which means the offHeap metrics are incorrect. The problem is that Apache
> Ignite uses that MemoryMXBean for collecting metrics, thus Apache Ignite's
> offHeap metrics are incorrect too. We should find a way to fix this, if
> there is one.
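The likely root cause is that memory allocated through {{Unsafe}} is invisible to {{MemoryMXBean.getNonHeapMemoryUsage()}}, which tracks only JVM-managed non-heap pools (Metaspace, code cache, and the like). For comparison, here is a sketch (assuming a standard HotSpot JVM) showing that direct {{ByteBuffer}} allocations, unlike raw {{Unsafe}} allocations, are visible through the platform "direct" {{BufferPoolMXBean}} — one possible source for more accurate off-heap metrics:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectMemoryMetrics {
    static BufferPoolMXBean directPool() {
        for (BufferPoolMXBean b : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(b.getName()))
                return b;
        }
        throw new IllegalStateException("No 'direct' buffer pool found");
    }

    public static void main(String[] args) {
        BufferPoolMXBean direct = directPool();

        long before = direct.getMemoryUsed();

        // Allocate 16 MB of direct (off-heap) memory.
        ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024);

        long after = direct.getMemoryUsed();

        // Unlike MemoryMXBean.getNonHeapMemoryUsage(), the 'direct' pool
        // does reflect this allocation.
        System.out.println("delta = " + (after - before));

        if (after - before < 16 * 1024 * 1024)
            throw new AssertionError("direct pool did not grow as expected");

        buf.clear(); // keep a reference alive until here
    }
}
```

This does not cover {{Unsafe}}-based allocations such as {{GridUnsafeMemory}}; for those, a manual allocation counter inside the allocator would probably be needed.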





[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-06-01 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310311#comment-15310311
 ] 

Denis Magda commented on IGNITE-2655:
-

[~v.pyatkov], I've reviewed your latest changes. Please see my comments in the 
pull-request. In short, the code can fail or work improperly.

[~yzhdanov], I would use this predicate for backups only, leaving the method's
signature as is, given that the primary is chosen by the affinity function
beforehand. As you said, if someone needs to control the location of the
primary he can write his own affinity function, and most likely he will,
because the "partition number" passed to the filter says nothing about where
the partition is physically located.

>> As a side note, I would say that we are trying to force our users to do
>> some programming. Does anyone have any idea on how to do this without code?
>> How about supporting simple string expressions based on node attributes
>> and/or IP addresses?

Presently I would fully rely on the programmatic way because it's flexible and
covers all the cases. If there is demand for the thing you're talking about,
we can add an overloaded method that will accept a kind of predicate.

[~dsetrakyan], presently the API is left the same for all the supported
affinity functions:
{{public void setAffinityBackupFilter(@Nullable IgniteBiPredicate<ClusterNode, List<ClusterNode>> affinityBackupFilter)}} 

where

the first parameter (ClusterNode) is the node being tested and the second list
contains the nodes that are already assigned to the partition (the primary is
always first).
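A backup filter along these lines can be sketched without Ignite dependencies (plain Java; the {{rack}} attribute name and the {{Node}} stand-in for {{ClusterNode}} are illustrative assumptions): the filter accepts a candidate backup only if no already-assigned node sits in the same rack.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

public class RackAwareBackupFilter {
    // Stand-in for ClusterNode: only the attributes map matters here.
    record Node(String id, Map<String, String> attrs) { }

    // First argument: the node being tested as a backup.
    // Second argument: nodes already assigned to the partition (primary first).
    static final BiPredicate<Node, List<Node>> RACK_FILTER = (candidate, assigned) ->
        assigned.stream().noneMatch(n ->
            n.attrs().get("rack") != null
                && n.attrs().get("rack").equals(candidate.attrs().get("rack")));

    public static void main(String[] args) {
        Node a = new Node("a", Map.of("rack", "rack1"));
        Node b = new Node("b", Map.of("rack", "rack2"));
        Node c = new Node("c", Map.of("rack", "rack1"));

        List<Node> assigned = List.of(a); // 'a' is the primary

        System.out.println(RACK_FILTER.test(b, assigned)); // different rack -> true
        System.out.println(RACK_FILTER.test(c, assigned)); // same rack -> false
    }
}
```

Because the predicate sees the full list of already-assigned nodes, it naturally handles more than one backup, which the old single-argument {{backupFilter}} could not.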

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following: when nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will
> enforce that the backups won't be selected among the nodes with the same
> attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}}
> that works perfectly for the scenario above, but only when the number of
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only
> guarantee that the primary is located in a different location, but will NOT
> guarantee that all the backups are spread out across different locations as
> well.
> So we need to provide an API that allows spreading the primary and ALL
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List assigned)}}
> where n is a potential backup to check and assigned is the list of current
> partition holders (the first is the primary).
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310183#comment-15310183
 ] 

Taras Ledkov commented on IGNITE-2310:
--

I guess the method {{affinityRun(String cacheName, Object affKey, 
IgniteRunnable job, Map partsToLock)}} is useful.
Maybe this method should be private to the class {{IgniteComputeImpl}}. In
that case we could define a more flexible public API.

> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity
> node when a compute job is being sent to the node. The partition has to be
> locked on the cache until the compute job completes. This will make it
> possible to safely execute queries (Scan or local SQL) over the data located
> locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}}
> and {{affinityRun}} methods that accept a list of caches whose partitions
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) local SQL query over data located in a concrete partition in multiple
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key and pass the
> Organisation and Persons caches' names to the method to be sure that the
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete;
> the partition over which the query is executed must not be moved to another
> node. Due to affinity collocation the partition number will be the same for
> all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving
> only one partition, but evicting partitions only after all partitions in all
> caches (with the same affinity function) on this node are locked for
> eviction? [~sboikov], can you please comment? It seems this should work
> faster for closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName,
> IgniteCallable)}} and the same for Runnable. This will allow me to avoid
> dealing with an affinity key when I already know the partition.





[jira] [Commented] (IGNITE-3151) Using IgniteCountDownLatch sometimes drives to dead lock.

2016-06-01 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310182#comment-15310182
 ] 

Vladislav Pyatkov commented on IGNITE-3151:
---

Description of the change.
I noticed the following:
1) the {{await}} method creates a local *CountDownLatch* ({{internalLatch}})
and waits on it;
2) but {{onUpdate}} runs without synchronization with the initialization of
the local CountDownLatch ({{initializeLatch}}).
Hence {{onUpdate}} may run after the latch count has been read but before the
latch is returned from the *Callable* closure.

I added a wait for the creation of the local latch ({{initLatch}}) if the
initialization process has already started ({{initGuard}}), and removed the
transaction that held a lock on the latch count key.
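The described fix follows a common init-guard pattern; here is a self-contained sketch (plain Java; the names {{initGuard}}, {{initLatch}} and {{internalLatch}} mirror the comment above, the rest is illustrative): whichever thread wins the guard creates the latch, and every other thread waits on {{initLatch}} instead of racing the creator.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class LatchInitSketch {
    private final AtomicBoolean initGuard = new AtomicBoolean();
    private final CountDownLatch initLatch = new CountDownLatch(1);

    private volatile CountDownLatch internalLatch;

    // Called by both await() and onUpdate(): whoever gets here first
    // creates the local latch; everyone else waits until it exists.
    CountDownLatch initializeLatch(int cnt) throws InterruptedException {
        if (initGuard.compareAndSet(false, true)) {
            internalLatch = new CountDownLatch(cnt);
            initLatch.countDown();
        }
        else
            initLatch.await(); // wait for the creator instead of racing it

        return internalLatch;
    }

    // A remote update lowers the local latch to the new count.
    void onUpdate(int cnt) throws InterruptedException {
        CountDownLatch l = initializeLatch(cnt);
        while (l.getCount() > cnt)
            l.countDown();
    }

    public static void main(String[] args) throws Exception {
        LatchInitSketch s = new LatchInitSketch();

        Thread updater = new Thread(() -> {
            try { s.onUpdate(0); } catch (InterruptedException ignored) { }
        });
        updater.start();

        CountDownLatch l = s.initializeLatch(0);
        updater.join();
        l.await(); // both paths observed the same, fully initialized latch
        System.out.println("done");
    }
}
```

Without the {{initLatch.await()}} branch, {{onUpdate}} could observe {{internalLatch}} as {{null}} (or half-initialized) while {{await}} was still constructing it, which is exactly the race described in the comment.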

> Using IgniteCountDownLatch sometimes drives to dead lock.
> -
>
> Key: IGNITE-3151
> URL: https://issues.apache.org/jira/browse/IGNITE-3151
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
> Attachments: igniteBugShot.zip
>
>
> Run several server nodes (recommended count: number of CPUs - 1).
> Wait for the topology to update.
> Run a client.
> After some iterations an exception will be thrown (in my case it happened 
> after around 10K iterations).





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310174#comment-15310174
 ] 

ASF GitHub Bot commented on IGNITE-2310:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/769

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2310

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #769


commit 4b65ae906f43a70f914fb33fcf7a65108f39c2cb
Author: tledkov-gridgain 
Date:   2016-05-27T12:16:27Z

add test

commit 5c46507045da69d7613b57a528a82011fc383c65
Author: tledkov-gridgain 
Date:   2016-05-27T15:07:03Z

test to validate issue

commit a7f9edd6d05cef68b949993d920943fa8fdd5ee2
Author: tledkov-gridgain 
Date:   2016-05-30T13:25:10Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-2310

commit 33524cf9089b1b743923faeb47f3b8421185c7fc
Author: tledkov-gridgain 
Date:   2016-06-01T11:45:08Z

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution: 
add method affinityRun(cacheName, affKey, job, Map partsToLock)




> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity
> node when a compute job is being sent to the node. The partition has to be
> locked on the cache until the compute job completes. This will make it
> possible to safely execute queries (Scan or local SQL) over the data located
> locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}}
> and {{affinityRun}} methods that accept a list of caches whose partitions
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) local SQL query over data located in a concrete partition in multiple
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key and pass the
> Organisation and Persons caches' names to the method to be sure that the
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete;
> the partition over which the query is executed must not be moved to another
> node. Due to affinity collocation the partition number will be the same for
> all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving
> only one partition, but evicting partitions only after all partitions in all
> caches (with the same affinity function) on this node are locked for
> eviction? [~sboikov], can you please comment? It seems this should work
> faster for closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName,
> IgniteCallable)}} and the same for Runnable. This will allow me to avoid
> dealing with an affinity key when I already know the partition.





[jira] [Updated] (IGNITE-3226) Load test: iteration over cache partitions using scan queries and performing transactions

2016-06-01 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-3226:

Description: 
The following load test has to be added:
- two caches are created (Persons and Deposits); Deposits are collocated by 
Person;
- Persons and Deposits are constructed with {{BinaryObjectBuilder}}, and in 
fact there won't be real classes for these data models;
- data is preloaded with data streamers into the server nodes;
- compute jobs are sent to the nodes that hold a particular partition;
- the logic of the job iterates over a partition of the Persons cache using a 
local ScanQuery;
- for every returned Person we should execute a local SQL query getting all 
of the Person's deposits;
- for every returned deposit we have to perform a pessimistic repeatable-read 
transaction that increases the deposit amount by some value.

As an example you can refer to 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ScanQueryExample.java


  was:
The following load test has to be added:
- two caches are created (Persons and Deposits); Deposits are collocated by 
Person;
- Persons and Deposits are constructed with {{BinaryObjectBuilder}}, and in 
fact there won't be real classes for these data models;
- data is preloaded with data streamers into the server nodes;
- compute jobs are sent to the nodes that hold a particular partition;
- the logic of the job iterates over a partition of the Deposits cache using a 
local ScanQuery;
- for every returned Deposit we should execute a local SQL query where the 
Deposit is joined with Persons;
- for every returned Deposit we have to perform a pessimistic repeatable-read 
transaction that increases the deposit amount by some value.

As an example you can refer to 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ScanQueryExample.java



> Load test: iteration over cache partitions using scan queries and performing 
> transactions
> -
>
> Key: IGNITE-3226
> URL: https://issues.apache.org/jira/browse/IGNITE-3226
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 1.6
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
> Fix For: 1.7
>
>
> The following load test has to be added:
> - two caches are created (Persons and Deposits); Deposits are collocated by 
> Person;
> - Persons and Deposits are constructed with {{BinaryObjectBuilder}}, and in 
> fact there won't be real classes for these data models;
> - data is preloaded with data streamers into the server nodes;
> - compute jobs are sent to the nodes that hold a particular partition;
> - the logic of the job iterates over a partition of the Persons cache using 
> a local ScanQuery;
> - for every returned Person we should execute a local SQL query getting all 
> of the Person's deposits;
> - for every returned deposit we have to perform a pessimistic 
> repeatable-read transaction that increases the deposit amount by some value.
> As an example you can refer to 
> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ScanQueryExample.java
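The collocation assumption in the description above (Deposits collocated by Person via the same affinity key) can be illustrated with a minimal partition-mapping sketch (plain Java; the partition count and hashing are illustrative, not Ignite's actual affinity function):

```java
public class CollocationSketch {
    static final int PARTS = 1024;

    // Ignite-style mapping: the partition is derived from the affinity key,
    // so entries sharing an affinity key land in the same partition.
    static int partition(Object affKey) {
        return Math.abs(affKey.hashCode() % PARTS);
    }

    record Person(long id, long organisationId) { }
    record Deposit(long id, long personId, long organisationId) { }

    public static void main(String[] args) {
        Person p = new Person(42, 7);
        Deposit d1 = new Deposit(100, 42, 7);
        Deposit d2 = new Deposit(101, 42, 7);

        // All three use organisationId as the affinity key.
        int pp  = partition(p.organisationId());
        int dp1 = partition(d1.organisationId());
        int dp2 = partition(d2.organisationId());

        System.out.println(pp == dp1 && pp == dp2); // collocated -> true
    }
}
```

This is why a local ScanQuery over one Persons partition plus a local SQL join against Deposits can be complete: all related entries live in the same partition on the same node.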





[jira] [Created] (IGNITE-3226) Load test: iteration over cache partitions using scan queries and performing transactions

2016-06-01 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-3226:
---

 Summary: Load test: iteration over cache partitions using scan 
queries and performing transactions
 Key: IGNITE-3226
 URL: https://issues.apache.org/jira/browse/IGNITE-3226
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.6
Reporter: Denis Magda
Assignee: Vladislav Pyatkov
 Fix For: 1.7


The following load test has to be added:
- two caches are created (Persons and Deposits); Deposits are collocated by 
Person;
- Persons and Deposits are constructed with {{BinaryObjectBuilder}}, and in 
fact there won't be real classes for these data models;
- data is preloaded with data streamers into the server nodes;
- compute jobs are sent to the nodes that hold a particular partition;
- the logic of the job iterates over a partition of the Deposits cache using a 
local ScanQuery;
- for every returned Deposit we should execute a local SQL query where the 
Deposit is joined with Persons;
- for every returned Deposit we have to perform a pessimistic repeatable-read 
transaction that increases the deposit amount by some value.

As an example you can refer to 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ScanQueryExample.java






[jira] [Resolved] (IGNITE-3225) Does not work preview in IGFS section of cluster

2016-06-01 Thread Vasiliy Sisko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko resolved IGNITE-3225.
---
Resolution: Fixed
  Assignee: Pavel Konstantinov  (was: Vasiliy Sisko)

Fixed generation of IGFS

> Does not work preview in IGFS section of cluster
> 
>
> Key: IGNITE-3225
> URL: https://issues.apache.org/jira/browse/IGNITE-3225
> Project: Ignite
>  Issue Type: Sub-task
>  Components: wizards
>Affects Versions: 1.7
>Reporter: Vasiliy Sisko
>Assignee: Pavel Konstantinov
> Fix For: 1.7
>
>
> XML and Java preview is empty





[jira] [Created] (IGNITE-3225) Does not work preview in IGFS section of cluster

2016-06-01 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-3225:
-

 Summary: Does not work preview in IGFS section of cluster
 Key: IGNITE-3225
 URL: https://issues.apache.org/jira/browse/IGNITE-3225
 Project: Ignite
  Issue Type: Sub-task
  Components: wizards
Affects Versions: 1.7
Reporter: Vasiliy Sisko
Assignee: Vasiliy Sisko


XML and Java preview is empty





[jira] [Commented] (IGNITE-3191) BinaryObjectBuilder: binary schema id depends on the order of fields addition

2016-06-01 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309970#comment-15309970
 ] 

Vladimir Ozerov commented on IGNITE-3191:
-

{{TreeMap}} is slower than {{LinkedHashMap}}, and we didn't consider many 
distinct field combinations to be a common use case.

Moreover, even now I don't quite understand why we should bother changing 
everything from an unordered to an ordered structure.

> BinaryObjectBuilder: binary schema id depends on the order of fields addition
> -
>
> Key: IGNITE-3191
> URL: https://issues.apache.org/jira/browse/IGNITE-3191
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
> Fix For: 1.7
>
>
> Presently, if an object is created using BinaryObjectBuilder, then several 
> BinarySchemas can be generated for the same set of fields when the fields 
> are added in a different order.
> This happens because {{LinkedHashMap}} is used underneath. Instead, we
> should rely on an order-independent structure such as {{TreeMap}}.
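The order dependence can be demonstrated with a small sketch (plain Java; the hash combination is illustrative, not Ignite's actual schema-id algorithm): iterating a {{LinkedHashMap}} yields an insertion-order-dependent id, while a {{TreeMap}} yields the same id for any insertion order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class SchemaIdSketch {
    // Order-sensitive hash combination over field names.
    static int schemaId(Map<String, Object> fields) {
        int id = 1;
        for (String name : fields.keySet())
            id = 31 * id + name.hashCode();
        return id;
    }

    public static void main(String[] args) {
        Map<String, Object> linked1 = new LinkedHashMap<>();
        linked1.put("name", "a"); linked1.put("age", 1);

        Map<String, Object> linked2 = new LinkedHashMap<>();
        linked2.put("age", 1); linked2.put("name", "a");

        // LinkedHashMap preserves insertion order -> different schema ids.
        System.out.println(schemaId(linked1) == schemaId(linked2)); // false

        Map<String, Object> tree1 = new TreeMap<>(linked1);
        Map<String, Object> tree2 = new TreeMap<>(linked2);

        // TreeMap iterates in key order -> identical schema ids.
        System.out.println(schemaId(tree1) == schemaId(tree2)); // true
    }
}
```

Vladimir's performance concern applies to the builder's internal bookkeeping: a {{TreeMap}} pays an O(log n) cost per field insertion where {{LinkedHashMap}} pays O(1), so sorting only at schema-id computation time would be another option.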





[jira] [Resolved] (IGNITE-3224) Does not work Undo for General - Apache ZooKeeper - Retry policy

2016-06-01 Thread Vasiliy Sisko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko resolved IGNITE-3224.
---
Resolution: Fixed
  Assignee: Pavel Konstantinov  (was: Dmitriyff)

Fixed

> Does not work Undo for General - Apache ZooKeeper - Retry policy 
> -
>
> Key: IGNITE-3224
> URL: https://issues.apache.org/jira/browse/IGNITE-3224
> Project: Ignite
>  Issue Type: Sub-task
>  Components: wizards
>Affects Versions: 1.7
>Reporter: Vasiliy Sisko
>Assignee: Pavel Konstantinov
> Fix For: 1.7
>
>
> # Select Apache ZooKeeper as the discovery variant.
> # Configure any retry policy.
> # Save the cluster.
> # Change the configured retry policy.
> # Change the retry policy and configure the selected one.
> # Undo the General section.
> Only the selected retry policy is restored; the configuration of both retry 
> policies stays changed.





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Semen Boikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309913#comment-15309913
 ] 

Semen Boikov commented on IGNITE-2310:
--

Alexei,
Reservation does not prevent rebalancing; it should prevent only partition 
eviction from the old partition owner nodes. There should not be any issues: 
when a new node joins it will start rebalancing, and when rebalancing 
finishes, jobs will be mapped to this new node.

> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity
> node when a compute job is being sent to the node. The partition has to be
> locked on the cache until the compute job completes. This will make it
> possible to safely execute queries (Scan or local SQL) over the data located
> locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}}
> and {{affinityRun}} methods that accept a list of caches whose partitions
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) local SQL query over data located in a concrete partition in multiple
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key and pass the
> Organisation and Persons caches' names to the method to be sure that the
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete;
> the partition over which the query is executed must not be moved to another
> node. Due to affinity collocation the partition number will be the same for
> all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving
> only one partition, but evicting partitions only after all partitions in all
> caches (with the same affinity function) on this node are locked for
> eviction? [~sboikov], can you please comment? It seems this should work
> faster for closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName,
> IgniteCallable)}} and the same for Runnable. This will allow me to avoid
> dealing with an affinity key when I already know the partition.





[jira] [Issue Comment Deleted] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-2310:
-
Comment: was deleted

(was: Alexei,
Reservation does not prevent rebalancing; it should prevent only partition 
eviction from the old partition owner nodes. There should not be any issues: 
when a new node joins it will start rebalancing, and when rebalancing 
finishes, jobs will be mapped to this new node.)

> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community
> Fix For: 1.7
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity
> node when a compute job is being sent to the node. The partition has to be
> locked on the cache until the compute job completes. This will make it
> possible to safely execute queries (Scan or local SQL) over the data located
> locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}}
> and {{affinityRun}} methods that accept a list of caches whose partitions
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) local SQL query over data located in a concrete partition in multiple
> caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as an affinity key and pass the
> Organisation and Persons caches' names to the method to be sure that the
> partition will be locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete;
> the partition over which the query is executed must not be moved to another
> node. Due to affinity collocation the partition number will be the same for
> all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be
> silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving
> only one partition, but evicting partitions only after all partitions in all
> caches (with the same affinity function) on this node are locked for
> eviction? [~sboikov], can you please comment? It seems this should work
> faster for closures and will hardly affect the rebalancing stuff.
> # I would add a method {{affinityCall(int partId, String cacheName,
> IgniteCallable)}} and the same for Runnable. This will allow me to avoid
> dealing with an affinity key when I already know the partition.





[jira] [Commented] (IGNITE-2766) Cache instance is closed when client disconnects

2016-06-01 Thread Jon Ekdahl (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309904#comment-15309904
 ] 

Jon Ekdahl commented on IGNITE-2766:


The issue description refers to "special logic" but does not mention how this 
logic would be implemented. Is there a "known workaround" to this problem that 
users of Ignite 1.5.x can deploy?

> Cache instance is closed when client disconnects
> 
>
> Key: IGNITE-2766
> URL: https://issues.apache.org/jira/browse/IGNITE-2766
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
> Fix For: 1.7
>
>
> In case the client disconnects and reconnects after the network timeout 
> (i.e., with a new ID), all cache instances acquired by this client are 
> closed and are not functional. The user has to implement special logic to 
> handle this case.
> The cache proxy should be able to handle this automatically.





[jira] [Created] (IGNITE-3224) Does not work Undo for General - Apache ZooKeeper - Retry policy

2016-06-01 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-3224:
-

 Summary: Does not work Undo for General - Apache ZooKeeper - Retry 
policy 
 Key: IGNITE-3224
 URL: https://issues.apache.org/jira/browse/IGNITE-3224
 Project: Ignite
  Issue Type: Sub-task
  Components: wizards
Affects Versions: 1.7
Reporter: Vasiliy Sisko
Assignee: Dmitriyff


# Select Apache ZooKeeper as the discovery variant.
# Configure any retry policy.
# Save the cluster.
# Change the configured retry policy.
# Change the retry policy and configure the selected one.
# Undo the General section.

Only the selected retry policy is restored; the configuration of both retry 
policies stays changed.





[jira] [Assigned] (IGNITE-3211) "Failed to reinitialize local partitions", "Failed to wait for completion of partition map exchange" errors during failover test

2016-06-01 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov reassigned IGNITE-3211:


Assignee: Semen Boikov

> "Failed to reinitialize local partitions", "Failed to wait for completion of 
> partition map exchange" errors during failover test
> 
>
> Key: IGNITE-3211
> URL: https://issues.apache.org/jira/browse/IGNITE-3211
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Ksenia Rybakova
>Assignee: Semen Boikov
> Fix For: 1.7
>
>
> "Failed to reinitialize local partitions (preloading will be stopped)" and 
> "Failed to wait for completion of partition map exchange (preloading will not 
> start)" errors occurred during a failover load test. The complete stack trace 
> is below.
> Load config: 
> - 1 client, 20 servers (5 servers per 1 host)
> - warmup 60 
> - duration 66h
> - preload 5M
> - key range 10M
> - operations: PUT PUT_ALL GET GET_ALL INVOKE INVOKE_ALL REMOVE REMOVE_ALL 
> PUT_IF_ABSENT REPLACE
> - backups count 3
> - 3 servers restart every 15 min with 30 sec step, pause between stop and 
> start 5min
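The root cause at the bottom of the trace below is a plain `java.util.ConcurrentModificationException`: `AbstractMap.toString()` walks the transaction map's entry set with a fail-fast iterator, while another thread modifies the live transaction as the exchange worker dumps pending objects. A minimal, single-threaded reproduction of the same failure mode (the map name is illustrative, not Ignite's actual field):

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedHashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        // Stands in for the tx entries map being printed by GridToStringBuilder.
        Map<Integer, String> txEntries = new LinkedHashMap<>();
        txEntries.put(1, "entry1");
        txEntries.put(2, "entry2");

        boolean thrown = false;
        try {
            // toString() iterates the entry set with a fail-fast iterator;
            // a structural modification mid-iteration (here an insert, as a
            // concurrent thread would do to a live transaction) trips it.
            for (Map.Entry<Integer, String> ignored : txEntries.entrySet())
                txEntries.put(3, "added-mid-iteration");
        }
        catch (ConcurrentModificationException e) {
            thrown = true;
        }

        System.out.println("ConcurrentModificationException thrown: " + thrown); // prints "... thrown: true"
    }
}
```

The usual fixes are to snapshot the map before printing it, or to guard the `toString()` path with the same synchronization that protects the map's writers.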
> {noformat}
> [08:32:21,002][ERROR][exchange-worker-#83%null%][GridDhtPartitionsExchangeFuture]
>  Failed to reinitialize local partitions (preloading will be stopped): 
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=39, 
> minorTopVer=1], nodeId=20ddc8b7, evt=DISCOVERY_CUSTOM_EVT]
> class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:506)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:297)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.toString(IgniteTxLocalAdapter.java:3743)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.toString(GridDhtTxLocalAdapter.java:868)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.toString(GridDhtTxLocal.java:703)
> at java.lang.String.valueOf(String.java:2849)
> at java.lang.StringBuilder.append(StringBuilder.java:128)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.dumpPendingObjects(GridCachePartitionExchangeManager.java:1172)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.dumpDebugInfo(GridCachePartitionExchangeManager.java:1150)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.dumpPendingObjects(GridDhtPartitionsExchangeFuture.java:894)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.waitPartitionRelease(GridDhtPartitionsExchangeFuture.java:769)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:715)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:472)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1333)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:506)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:364)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxStateImpl.toString(IgniteTxStateImpl.java:443)
> at java.lang.String.valueOf(String.java:2849)
> at 
> org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:101)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:474)
> ... 15 more
> Caused by: java.util.ConcurrentModificationException
> at 
> java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:394)
> at java.util.LinkedHashMap$EntryIterator.next(LinkedHashMap.java:413)
> at java.util.LinkedHashMap$EntryIterator.next(LinkedHashMap.java:412)
> at java.util.AbstractMap.toString(AbstractMap.java:518)
> at java.lang.String.valueOf(String.java:2849)
> at 
>