[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2017-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892017#comment-15892017
 ] 

ASF GitHub Bot commented on IGNITE-2310:


Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/1550


> Lock cache partition for affinityRun/affinityCall execution
> ---
>
> Key: IGNITE-2310
> URL: https://issues.apache.org/jira/browse/IGNITE-2310
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
> Reporter: Valentin Kulichenko
> Assignee: Taras Ledkov
> Priority: Critical
>  Labels: community
> Fix For: 1.8
>
>
> Partition of a key passed to {{affinityRun}} must be located on the affinity 
> node when the compute job is sent to that node, and the partition has to stay 
> locked on the cache while the compute job is executing. This makes it safe to 
> execute queries (Scan or local SQL) over the data that is located locally in 
> the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}} 
> and {{affinityRun}} methods that accept a list of caches whose partitions 
> have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) Local SQL query over data located in a concrete partition in multiple 
> caches:
> - create an Organisation cache and a Persons cache;
> - colocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as the affinity key, passing 
> the Organisation and Persons cache names to the method to make sure that the 
> partition is locked on both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o 
> WHERE p.orgId=o.id' on a changing topology. The result set must be complete, 
> and the partition over which the query is executed must not be moved to 
> another node. Due to affinity colocation, the partition number will be the 
> same for all Persons that belong to a particular 'organisationID'.
> 2) Scan Query over a particular partition that is locked while 
> {{affinityCall}} is executed.
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should 
> be silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about 
> reserving only one partition, but evicting partitions only after all 
> partitions in all caches (with the same affinity function) on this node are 
> locked for eviction? [~sboikov], can you please comment? It seems this 
> should work faster for closures and will hardly affect the rebalancing logic.
> # I would add a method {{affinityCall(int partId, String cacheName, 
> IgniteCallable)}} and the same for Runnable. This will allow me not to mess 
> with an affinity key in case I already know the partition.
> UPD (SB, June, 01)
> Yakov, I think it is possible to implement this 'locking for evictions' 
> approach, but personally I prefer partition reservation:
> - the reservation approach is already implemented and works fine in SQL 
> queries;
> - a partition reservation is just a CAS operation; even if we need to do ~10 
> reservations, I think this will be negligible compared to the job execution 
> time;
> - caches are currently rebalanced completely independently, and changing 
> this would be a complicated refactoring;
> - I see some difficulties in deciding whether caches have the same affinity. 
> If the user uses a custom function, should he implement 'equals'? For 
> standard affinity functions the user can set a backup filter; what should we 
> do in this case? Should the user implement 'equals' for the filter? Even if 
> the affinity functions are the same, the cache configuration can have a node 
> filter, so the affinity mapping will be different.
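
For illustration, here is a minimal Java sketch of test case 1 using the 
multi-cache {{affinityRun}} overload discussed in this thread. The 
Organisation and Persons cache names come from the description above; the 
concrete key value, the schema-qualified table name in the join, and the 
default-configuration {{Ignition.start()}} call are assumptions of the 
sketch, not part of the ticket.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class AffinityRunSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Boxed on purpose: an int literal would resolve to the
            // (Collection, int partId, ...) overload instead of the
            // affinity-key one.
            final Integer organisationId = 42; // assumed key value

            // The matching partition of BOTH caches is reserved on the target
            // node until the closure completes, so rebalancing cannot move
            // the data out from under the local query.
            ignite.compute().affinityRun(
                Arrays.asList("Organisation", "Persons"),
                organisationId,
                () -> {
                    IgniteCache<Object, Object> persons =
                        Ignition.localIgnite().cache("Persons");

                    // Local join over the colocated, reserved data only.
                    SqlFieldsQuery qry = new SqlFieldsQuery(
                        "SELECT * FROM Persons p, \"Organisation\".Organisation o " +
                        "WHERE p.orgId = o.id AND o.id = ?").setArgs(organisationId);

                    qry.setLocal(true);

                    List<List<?>> rows = persons.query(qry).getAll();

                    System.out.println("Rows for organisation " + organisationId
                        + ": " + rows.size());
                });
        }
    }
}
{code}

On a changing topology it is the reservation that is intended to keep the 
result set complete: the closure either runs with both partitions pinned 
locally or fails over to the new owner.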





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2017-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871648#comment-15871648
 ] 

ASF GitHub Bot commented on IGNITE-2310:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/1550

IGNITE-2310: additional docs for affinity jobs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-2310-affinity-job-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1550.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1550


commit 597f23e561def39831b87caadcb7706ee2ebe1bd
Author: tledkov-gridgain 
Date:   2017-02-17T10:24:00Z

IGNITE-2310: additional docs for affinity jobs






[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411839#comment-15411839
 ] 

ASF GitHub Bot commented on IGNITE-2310:


Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/919




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-05 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410423#comment-15410423
 ] 

Dmitriy Setrakyan commented on IGNITE-2310:
---

# I don't think we should be using @NotNull annotations. @Nullable is enough 
to tell that the other overload cannot be null.
# I think if {{null}} is passed, we should always delegate to the single 
{{cacheName}} methods (we should add this delegation in the code, just in case).
# I doubt the current approach will cause compilation issues - nobody should be 
using {{null}} for cache names anyway.
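
To make the overload concern concrete, here is a small self-contained sketch 
(hypothetical stand-in types, not Ignite source) showing both the {{null}} 
ambiguity from point 3 and the delegation from point 2:

{code:java}
import java.util.Collection;

// Hypothetical stand-ins for the API shapes under discussion; not Ignite source.
public class OverloadSketch {
    interface IgniteRunnable extends Runnable { }

    static void affinityRun(String cacheName, Object affKey, IgniteRunnable job) {
        System.out.println("single-cache overload: " + cacheName);
    }

    static void affinityRun(Collection<String> cacheNames, Object affKey,
        IgniteRunnable job) {
        if (cacheNames == null) {
            // Point 2: a null collection delegates to the @Nullable String overload.
            affinityRun((String)null, affKey, job);

            return;
        }

        System.out.println("multi-cache overload: " + cacheNames);
    }

    public static void main(String[] args) {
        IgniteRunnable job = () -> { };

        // affinityRun(null, "key", job);      // point 3: does not compile - ambiguous
        affinityRun((String)null, "key", job); // callers must disambiguate with a cast
    }
}
{code}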




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-05 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409117#comment-15409117
 ] 

Alexey Goncharuk commented on IGNITE-2310:
--

A few minor comments:
1) Remove '@Nullable' from FailoverContext#partition, since the return type is 
primitive; also fix the javadoc references to the affinityRun() methods.
2) Is there any reason why the tests are added to the indexing module? I think 
they should be in core.

Otherwise looks good!
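
Point 1 in code form, for reference (the method names here are illustrative, 
not the real FailoverContext source):

{code:java}
import org.jetbrains.annotations.Nullable;

// Illustrative fragment only.
interface FailoverContextSketch {
    // Before (what the review flags): @Nullable on a primitive return type is
    // dead weight, because an int can never be null.
    @Nullable int partitionBefore();

    // After: the annotation is simply dropped.
    int partition();
}
{code}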



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-04 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408211#comment-15408211
 ] 

Denis Magda commented on IGNITE-2310:
-

There must not be any compatibility-related issues, especially compile-time 
ones. We need to find a way to avoid this situation.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-04 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407375#comment-15407375
 ] 

Taras Ledkov commented on IGNITE-2310:
--

The latest version of the public API:
{code:title=IgniteCompute.java|borderStyle=solid}
public void affinityRun(@Nullable String cacheName, Object affKey,
    IgniteRunnable job) throws IgniteException;

public void affinityRun(@NotNull Collection<String> cacheNames, Object affKey,
    IgniteRunnable job) throws IgniteException;

public void affinityRun(@NotNull Collection<String> cacheNames, int partId,
    IgniteRunnable job) throws IgniteException;

public <R> R affinityCall(@Nullable String cacheName, Object affKey,
    IgniteCallable<R> job) throws IgniteException;

public <R> R affinityCall(@NotNull Collection<String> cacheNames, Object affKey,
    IgniteCallable<R> job) throws IgniteException;

public <R> R affinityCall(@NotNull Collection<String> cacheNames, int partId,
    IgniteCallable<R> job) throws IgniteException;
{code}
The ambiguity is fixed on the test side. If this API is approved, we should 
notify users about potential compilation failures.
I'll send an email to the dev list after approval.
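
As a usage sketch for the partition-based overload (the cache name, the 
partition number, and the default-configuration {{Ignition.start()}} call are 
assumptions), the following counts the entries of one partition with a local 
scan query from inside {{affinityCall}}; the partition stays reserved on the 
owning node while the closure runs:

{code:java}
import java.util.Collections;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class PartitionCallSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            final int partId = 10; // assumed partition number, known up front

            // Routed to the current owner of partition 10 of the Persons cache.
            Integer cnt = ignite.compute().affinityCall(
                Collections.singletonList("Persons"),
                partId,
                () -> {
                    IgniteCache<Object, Object> persons =
                        Ignition.localIgnite().cache("Persons");

                    // Local scan over exactly the reserved partition.
                    ScanQuery<Object, Object> scan = new ScanQuery<>();

                    scan.setPartition(partId);
                    scan.setLocal(true);

                    int n = 0;

                    try (QueryCursor<Cache.Entry<Object, Object>> cur =
                        persons.query(scan)) {
                        for (Cache.Entry<Object, Object> ignored : cur)
                            n++;
                    }

                    return n;
                });

            System.out.println("Entries in partition " + partId + ": " + cnt);
        }
    }
}
{code}

This matches YZ's point 3 in the description: when the partition id is already 
known, no affinity-key juggling is needed.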



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-03 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406117#comment-15406117
 ] 

Taras Ledkov commented on IGNITE-2310:
--

[Test 
results|http://149.202.210.143:8111/viewQueued.html?itemId=298152=queuedBuildOverviewTab]
 after the merge with 1.6.4.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15405787#comment-15405787
 ] 

ASF GitHub Bot commented on IGNITE-2310:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/919

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2310-new

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/919.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #919


commit 485f4bf6cd9289d8a5364fbe0e93e8204d6311fe
Author: tledkov-gridgain 
Date:   2016-05-27T12:16:27Z

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution






[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402018#comment-15402018
 ] 

Taras Ledkov commented on IGNITE-2310:
--

[All test results|http://149.202.210.143:8111/viewLog.html?buildId=292794] on 
the branch.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-08-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401824#comment-15401824
 ] 

Taras Ledkov commented on IGNITE-2310:
--

All [test 
results|http://149.202.210.143:8111/viewQueued.html?itemId=296311=queuedBuildOverviewTab]
 for the branch.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-22 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389147#comment-15389147
 ] 

Taras Ledkov commented on IGNITE-2310:
--

The [pull request|https://github.com/apache/ignite/pull/857] corresponds to the 
ignite-2310 branch.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-21 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388250#comment-15388250
 ] 

Dmitriy Setrakyan commented on IGNITE-2310:
---

Taras, where is the patch? I don't see anything attached to this ticket, and 
there is no PR on GitHub.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-20 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386214#comment-15386214
 ] 

Taras Ledkov commented on IGNITE-2310:
--

[Tests|http://149.202.210.143:8111/viewLog.html?buildId=285866=buildResultsDiv=IgniteTests_IgniteComputeAffinityRun]
 passed.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-20 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386167#comment-15386167
 ] 

Taras Ledkov commented on IGNITE-2310:
--

The patch is available at the branch 
[ignite-2310|https://github.com/gridgain/apache-ignite/tree/ignite-2310].
Please take a look.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-20 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386161#comment-15386161
 ] 

Taras Ledkov commented on IGNITE-2310:
--

A strange behavior of the JobProcessor was discovered.
An affinity task hangs when the grid is configured with a collision SPI that
produces a lot of job rejections.
The root cause is that the execution response isn't sent from the target node
because JobWorker instances get mixed up in the CollisionContext.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-18 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15382472#comment-15382472
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Refactoring, review issues:
- move the retry logic from GridJobProcessor to GridJobWorker, because the
CollisionSpi can move a job to an under-utilized node;
- simplify the tests and add tests with JobStealingCollisionSpi;
- use cache IDs instead of cache names in the JobRequest.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-12 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372989#comment-15372989
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Tests need to be added that check cache operations performed inside affinity
run/call with partition reservation.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-12 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372988#comment-15372988
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Because when {{cacheNames}} is the 1st parameter the methods are ambiguous:
the call {{affinityRun(null, 0, job);}} matches both signatures:
{code:title=IgniteCompute.java|borderStyle=solid}
public void affinityRun(@Nullable String cacheName, Object affKey,
    IgniteRunnable job) throws IgniteException;

public void affinityRun(@NotNull Collection<String> cacheNames, Object affKey,
    IgniteRunnable job) throws IgniteException;
{code}
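For illustration, a minimal standalone sketch of the same ambiguity (a
hypothetical {{OverloadDemo}} class, not Ignite code): with a {{null}} first
argument both overloads are applicable and neither is more specific, so the
call only compiles with an explicit cast.
{code:title=OverloadDemo.java|borderStyle=solid}
import java.util.Collection;

public class OverloadDemo {
    interface Job { void run(); }

    static void affinityRun(String cacheName, Object affKey, Job job) { }

    static void affinityRun(Collection<String> cacheNames, Object affKey, Job job) { }

    public static void main(String[] args) {
        Job job = () -> { };
        // affinityRun(null, 0, job);       // does not compile: ambiguous reference
        affinityRun((String) null, 0, job); // disambiguated with an explicit cast
    }
}
{code}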




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-11 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370754#comment-15370754
 ] 

Dmitriy Setrakyan commented on IGNITE-2310:
---

I prefer to have {{cacheNames}} as the 1st parameter in all cases, for
symmetry with the other methods. Why move them?



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-11 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370636#comment-15370636
 ] 

Taras Ledkov commented on IGNITE-2310:
--

I propose to use the signature {{affinityRun(int partId, Collection<String>
cacheNames, IgniteRunnable job)}} for the new method to prevent the ambiguity.



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-11 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370485#comment-15370485
 ] 

Yakov Zhdanov commented on IGNITE-2310:
---

Let's agree on the following.

1. We leave current methods as they are
{noformat}
void affinityRun(@Nullable String cacheName, Object affKey, IgniteRunnable job,
    Collection<String> extraCaches) throws IgniteException;

<R> R affinityCall(@Nullable String cacheName, Object affKey, IgniteCallable<R> job,
    Collection<String> extraCaches) throws IgniteException;
{noformat}

2. We add these
{noformat}
void affinityRun(Collection<String> cacheNames, Object affKey, IgniteRunnable job)
    throws IgniteException;

<R> R affinityCall(Collection<String> cacheNames, Object affKey, IgniteCallable<R> job)
    throws IgniteException;

void affinityRun(Collection<String> cacheNames, int partId, IgniteRunnable job)
    throws IgniteException;

<R> R affinityCall(Collection<String> cacheNames, int partId, IgniteCallable<R> job)
    throws IgniteException;
{noformat}

3. I don't think we should add methods with {{partId}} and only a single
cache, since that would clutter the API. We will be able to add them at any
point.
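For illustration, a hedged sketch of how the partition-id variant from item 2
might be used, assuming the methods land as written above; the cache names and
the job body are assumptions, not part of the proposal.
{code}
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class PartitionRunExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        final int partId = 0; // partition to reserve in every listed cache

        // Runs the job on the node owning partition 0 of both caches; the
        // partition cannot be rebalanced away until the job completes.
        ignite.compute().affinityRun(
            Arrays.asList("Organisation", "Persons"), // caches with the same affinity
            partId,
            () -> System.out.println("scan partition " + partId + " locally"));
    }
}
{code}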



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-11 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370356#comment-15370356
 ] 

Taras Ledkov commented on IGNITE-2310:
--

{code:title=IgniteCompute.java|borderStyle=solid}
void affinityRun(Collection<String> cacheNames, int partId, IgniteRunnable job)
    throws IgniteException;
{code}
OK with me. As far as I know, Semen considers this version of the new API the
most useful.




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-11 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370353#comment-15370353
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Which cache is used to determine the partition in this case? The first of the
{{cacheNames}}?
Also, an empty {{cacheNames}} is an error case.
The version with {{extraCaches}} is an extension of the overloaded method
without {{extraCaches}}.

I'm OK with both versions. Who has the final word?





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-08 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368857#comment-15368857
 ] 

Dmitriy Setrakyan commented on IGNITE-2310:
---

In addition, we should add the following methods:
{code}
void affinityRun(Collection<String> cacheNames, int partId, IgniteRunnable job)
    throws IgniteException;

<R> R affinityCall(Collection<String> cacheNames, int partId, IgniteCallable<R> job)
    throws IgniteException;
{code}



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-08 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368855#comment-15368855
 ] 

Dmitriy Setrakyan commented on IGNITE-2310:
---

The API looks very strange. Why not just change the 1st parameter to a 
collection of strings, like so:
{code}
void affinityRun(Collection<String> cacheNames, Object affKey, IgniteRunnable job)
    throws IgniteException;

<R> R affinityCall(Collection<String> cacheNames, Object affKey, IgniteCallable<R> job)
    throws IgniteException;
{code}



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-07 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15366024#comment-15366024
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Public API changes in IgniteCompute: two methods are added.
{code:title=IgniteCompute.java|borderStyle=solid}
void affinityRun(@Nullable String cacheName, Object affKey, IgniteRunnable job,
    Collection<String> extraCaches) throws IgniteException;

<R> R affinityCall(@Nullable String cacheName, Object affKey, IgniteCallable<R> job,
    Collection<String> extraCaches) throws IgniteException;
{code}

These methods overload the existing methods:
{code:title=IgniteCompute.java|borderStyle=solid}
void affinityRun(@Nullable String cacheName, Object affKey, IgniteRunnable job)
    throws IgniteException;

<R> R affinityCall(@Nullable String cacheName, Object affKey, IgniteCallable<R> job)
    throws IgniteException;
{code}

The difference is the parameter {{Collection<String> extraCaches}}: the same
partition (the one where {{affKey}} is stored) is reserved in these caches as
well.
Partition reservation means that the data of the caches *cacheName* and
*extraCaches* stored in the specified partition will not be migrated from the
node while the *job* is executed on it. A usage sketch follows below.
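For illustration, a hedged sketch of the proposed overload (argument order as
proposed in this comment, not necessarily the final API); the collocated
Organisation and Persons caches follow the issue description, and the job body
is an assumption.
{code}
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ExtraCachesExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        final Integer organisationId = 42; // affinity key

        // The job is mapped via the Organisation cache; the matching partition
        // of the Persons cache is reserved too, so a local join over both
        // caches sees a stable partition while the job runs.
        ignite.compute().affinityRun(
            "Organisation",
            organisationId,
            () -> System.out.println("run local queries over the reserved partition"),
            Collections.singletonList("Persons"));
    }
}
{code}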





[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-05 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362694#comment-15362694
 ] 

Taras Ledkov commented on IGNITE-2310:
--

I think a test with failed unmarshalling of a job's results doesn't make
sense, because the job's results are unmarshalled on the master node while the
partitions are reserved on the job node. Maybe a test with a job result
marshalling failure makes sense?




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-07-01 Thread Semen Boikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358657#comment-15358657
 ] 

Semen Boikov commented on IGNITE-2310:
--

Reviewed, my comments:
- please add a test for ScanQuery
- please add a test for SqlQuery
- please add tests for both affinityRun methods and for both affinityCall 
methods
- please add some tests to check that partitions are released after job 
execution in the following scenarios: the job completes normally, the job 
throws an exception, the job throws an Error, the node that sent the job 
request fails while the job is running (plus the same case where the job 
implements ComputeJobMasterLeaveAware), job unmarshalling fails
- Ignite supports CollisionSpi, so please add some tests to check that 
affinityCall is not broken when a CollisionSpi is used
- I looked at local SQL query execution: indexing uses a special 'backup 
filter' to filter out backup keys, and this filter currently uses the current 
affinity version, which is probably what causes the test failure. To fix it, 
add the affinity version that was used to map the job to the request and pass 
this version down to the query backup filter (a ThreadLocal context can be 
used for this; see the sketch after this list)
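
A minimal sketch of such a ThreadLocal context, with hypothetical class and 
method names (the actual patch may carry the affinity topology version 
differently):

{code:java}
/**
 * Hypothetical thread-local holder for the affinity topology version that was
 * used to map an affinity job. The job processor sets it before running the
 * job; the SQL backup filter reads it instead of the current affinity version,
 * so filtering stays consistent with the partitions reserved for the job.
 */
public final class JobAffinityTopologyVersion {
    private static final ThreadLocal<Long> TOP_VER = new ThreadLocal<>();

    public static void set(long topVer) {
        TOP_VER.set(topVer);
    }

    /** @return Version used to map the job, or {@code null} outside of a job. */
    public static Long get() {
        return TOP_VER.get();
    }

    public static void clear() {
        TOP_VER.remove();
    }
}
{code}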



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-30 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356842#comment-15356842
 ] 

Taras Ledkov commented on IGNITE-2310:
--

The patch is available in the branch 
[ignite-2310|https://github.com/gridgain/apache-ignite/tree/ignite-2310].

Please pay attention to: 
* the IgniteCacheLockPartitionOnAffinityRunTest test.
The testLockPartitionOnAffinityRunByDhtPartition test passes stably, but 
testLockPartitionOnAffinityRunByLocalSql fails reproducibly. It looks like 
there is some issue with the local SQL query.

* the changes to the IgniteCompute public interface. Should we make any 
changes to the affinityRun/Call methods that take the partitions to reserve? 
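
For reference, the API shape under discussion might look roughly like this; 
it is only a sketch, the signatures were still being debated and the final 
public API may differ:

{code:java}
import java.util.Collection;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.lang.IgniteRunnable;

/**
 * Sketch of the discussed IgniteCompute extensions: execute a job on the node
 * that owns the given partition, keeping that partition (for all the named
 * caches) reserved while the job runs.
 */
public interface AffinityComputeSketch {
    void affinityRun(String cacheName, int partId, IgniteRunnable job);

    <R> R affinityCall(Collection<String> cacheNames, int partId, IgniteCallable<R> job);
}
{code}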



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356806#comment-15356806
 ] 

ASF GitHub Bot commented on IGNITE-2310:


Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/769




[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-29 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15355363#comment-15355363
 ] 

Taras Ledkov commented on IGNITE-2310:
--

The feature has been re-implemented. The implementation based on an 
{{AffinityJobWrapper}} in the {{GridClosureProcessor}} has been dropped, 
because that approach created many task-related objects for each attempt, and 
a lot of unnecessary work was executed on the node before the partitions were 
reserved.

The partition reservation logic has been moved to the {{GridJobProcessor}} 
class. The retry logic has been moved to the {{GridTaskWorker}} class.
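
A minimal sketch of the resulting job-node flow, with hypothetical names 
apart from the classes above: reserve the requested partitions before 
executing the job, release them afterwards, and reject the job so the master 
can retry it on the current owner if any reservation fails.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of the reserve-execute-release flow on the job node. */
final class ReserveAndRun {
    /** Hypothetical view of a local partition's reservation operations. */
    interface Partition {
        boolean reserve();   // CAS-based reservation; fails if the partition moved.
        void release();
    }

    /** @return {@code true} if the job ran, {@code false} if the master must retry. */
    static boolean run(Iterable<Partition> partsToLock, Runnable job) {
        Deque<Partition> reserved = new ArrayDeque<>();

        try {
            for (Partition p : partsToLock) {
                if (!p.reserve())
                    return false; // Partition is gone: reject, master re-maps the job.

                reserved.push(p);
            }

            job.run();

            return true;
        }
        finally {
            // Always release, whether the job ran, failed, or reservation aborted.
            for (Partition p : reserved)
                p.release();
        }
    }
}
{code}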



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310351#comment-15310351
 ] 

Taras Ledkov commented on IGNITE-2310:
--

Semen proposes to add the {{partsToLock}} handling logic and the job retry 
behavior to the core logic ({{GridJobExecuteRequest}} and 
{{GridJobProcessor}}) instead of creating a dedicated job wrapper for 
affinityRun.
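
Under that proposal the job execute request itself would carry the partitions 
to lock. A minimal sketch, with a hypothetical field and accessors (the real 
request class has many more members, and the map's type parameters are an 
assumption):

{code:java}
import java.util.Map;

/**
 * Sketch of extending the job execute request: for each cache name, the
 * partition ids that must be reserved on the job node before the job runs.
 */
public class JobExecuteRequestSketch {
    /** Cache name -> partition ids to reserve before job execution. */
    private Map<String, int[]> partsToLock;

    public Map<String, int[]> partsToLock() {
        return partsToLock;
    }

    public void partsToLock(Map<String, int[]> partsToLock) {
        this.partsToLock = partsToLock;
    }
}
{code}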



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310183#comment-15310183
 ] 

Taras Ledkov commented on IGNITE-2310:
--

I guess the method {{affinityRun(String cacheName, Object affKey, 
IgniteRunnable job, Map partsToLock)}} is useful.
Maybe this method should be private to the {{IgniteComputeImpl}} class; in 
that case we could define the public API more flexibly.
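
That could look roughly like the following sketch, assuming the 
{{partsToLock}} map goes from cache name to partition ids (the type 
parameters were not given in the comment) and using a hypothetical class name:

{code:java}
import java.util.Collections;
import java.util.Map;
import org.apache.ignite.lang.IgniteRunnable;

/** Sketch: small public overloads delegating to a single private method. */
public class IgniteComputeSketch {
    /** Public API: reserve only the partition that the affinity key maps to. */
    public void affinityRun(String cacheName, Object affKey, IgniteRunnable job) {
        affinityRun0(cacheName, affKey, job, Collections.<String, int[]>emptyMap());
    }

    /** Public API: additionally reserve partitions of other caches. */
    public void affinityRun(String cacheName, Object affKey, IgniteRunnable job,
        Map<String, int[]> extraPartsToLock) {
        affinityRun0(cacheName, affKey, job, extraPartsToLock);
    }

    /** Single private entry point; the public surface can evolve independently. */
    private void affinityRun0(String cacheName, Object affKey, IgniteRunnable job,
        Map<String, int[]> partsToLock) {
        // ... map affKey to a partition, attach partsToLock to the request, send job ...
    }
}
{code}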



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310174#comment-15310174
 ] 

ASF GitHub Bot commented on IGNITE-2310:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/769

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2310

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #769


commit 4b65ae906f43a70f914fb33fcf7a65108f39c2cb
Author: tledkov-gridgain 
Date:   2016-05-27T12:16:27Z

add test

commit 5c46507045da69d7613b57a528a82011fc383c65
Author: tledkov-gridgain 
Date:   2016-05-27T15:07:03Z

test to validate issue

commit a7f9edd6d05cef68b949993d920943fa8fdd5ee2
Author: tledkov-gridgain 
Date:   2016-05-30T13:25:10Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-2310

commit 33524cf9089b1b743923faeb47f3b8421185c7fc
Author: tledkov-gridgain 
Date:   2016-06-01T11:45:08Z

IGNITE-2310 Lock cache partition for affinityRun/affinityCall execution: 
add method affinityRun(cacheName, affKey, job, Map partsToLock);






[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-06-01 Thread Semen Boikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309913#comment-15309913
 ] 

Semen Boikov commented on IGNITE-2310:
--

Alexei,
Reservation does not prevent rebalancing; it only prevents partition eviction 
from the old partition owner nodes. There should not be any issues: when a 
new node joins, it will start rebalancing, and when rebalancing finishes, 
jobs will be mapped to this new node.
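
This matches the reservation mechanism mentioned earlier in the thread ("just 
a CAS operation"). A minimal sketch of the idea, with hypothetical names:

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of a per-partition reservation counter: reserve() CAS-increments the
 * counter unless eviction has started, and eviction may only begin once there
 * are no reservations. Rebalancing data to new owners is unaffected; only the
 * removal of the old copy is deferred while jobs hold reservations.
 */
final class PartitionReservationSketch {
    private static final int EVICTING = -1;

    private final AtomicInteger state = new AtomicInteger();

    boolean reserve() {
        for (;;) {
            int s = state.get();

            if (s == EVICTING)
                return false; // Too late: the partition is being evicted.

            if (state.compareAndSet(s, s + 1))
                return true;
        }
    }

    void release() {
        state.decrementAndGet();
    }

    /** @return {@code true} if eviction may proceed (no active reservations). */
    boolean tryStartEviction() {
        return state.compareAndSet(0, EVICTING);
    }
}
{code}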



[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution

2016-05-30 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306739#comment-15306739
 ] 

Alexei Scherbakov commented on IGNITE-2310:
---

Doesn't that mean that, under heavy load (for example, if SQL queries are 
constantly run against a partition), rebalancing for this partition will 
never get the chance to complete?
