If you need to iterate over the local data on all the nodes, broadcast a
compute task to all of them and use a ScanQuery with the setLocal flag set
to true.
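A minimal sketch of that pattern (the cache name "rows" and the value type
are assumptions for illustration; the API calls are the standard Ignite
compute/cache ones):

```java
Ignite ignite = Ignition.ignite();

// Broadcast a closure to every node in the cluster.
ignite.compute().broadcast(() -> {
    // Inside the closure, get the local Ignite instance and cache.
    IgniteCache<Long, Object> cache = Ignition.localIgnite().cache("rows");

    // setLocal(true) restricts the scan to data stored on this node only.
    try (QueryCursor<Cache.Entry<Long, Object>> cursor =
             cache.query(new ScanQuery<Long, Object>().setLocal(true))) {
        for (Cache.Entry<Long, Object> e : cursor) {
            // process the locally stored entry here
        }
    }
});
```

This requires a running cluster, so treat it as a shape of the call rather
than something to copy verbatim.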

You can also balance the load by taking a similar approach with an
affinity call per partition:
https://www.gridgain.com/docs/latest/developers-guide/collocated-computations#collocating-by-partition
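Roughly, that looks like the following (again, the cache name "rows" is an
assumption; affinityRun with a partition id and ScanQuery.setPartition are
the documented Ignite APIs):

```java
Ignite ignite = Ignition.ignite();

// One affinity job per partition of the cache.
int parts = ignite.affinity("rows").partitions();

for (int p = 0; p < parts; p++) {
    final int part = p;

    // affinityRun pins the job to whichever node owns this partition.
    ignite.compute().affinityRun(Collections.singleton("rows"), part, () -> {
        IgniteCache<Long, Object> cache = Ignition.localIgnite().cache("rows");

        // Scan only the entries of the pinned partition, locally.
        try (QueryCursor<Cache.Entry<Long, Object>> cur = cache.query(
                 new ScanQuery<Long, Object>().setPartition(part).setLocal(true))) {
            for (Cache.Entry<Long, Object> e : cur) {
                // process entry of this partition
            }
        }
    });
}
```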

The benefit of the affinity-based methods of the compute API is that the
partition is locked and won't be evicted until the computation finishes.
Without that lock, a partition could be evicted if the cluster topology
changed and the partition was rebalanced to another node, so it would need
to be removed from the node the computation is running on.

-
Denis


On Sun, Nov 17, 2019 at 7:43 PM camer314 <[email protected]>
wrote:

> Reading a little more in the Java docs about AffinityKey, I am thinking
> that,
> much like vector UDF batch sizing, one way I could easily achieve my result
> is to batch my rows into affinity keys. That is, for every 100,000 rows the
> affinity key changes for example.
>
> So cache keys [0...99999] have affinity key 0, keys [100000...199999] have
> affinity key 1 etc?
>
> If that is the case, may I suggest you update the .NET documentation for
> Data Grid regarding Affinity Colocation as it does not mention the use of
> AffinityKey or go into anywhere near as much detail as the Java docs.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
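The batching scheme described in the quoted message (every 100,000 keys
share one affinity key) can be sketched as plain key arithmetic; the
helper name below is hypothetical, and in Ignite you would wrap the pair
in an AffinityKey:

```java
public class AffinityBatching {
    // Batch size taken from the quoted message: 100,000 rows per affinity key.
    static final long BATCH_SIZE = 100_000L;

    // Map a cache key to its affinity key by integer division.
    static long affinityKeyFor(long cacheKey) {
        return cacheKey / BATCH_SIZE;
    }

    public static void main(String[] args) {
        // Keys 0..99999 collocate under affinity key 0, 100000..199999 under 1, etc.
        // With ignite-core on the classpath, the cache key would be
        // new AffinityKey<>(cacheKey, affinityKeyFor(cacheKey)).
        System.out.println(affinityKeyFor(0L));
        System.out.println(affinityKeyFor(99_999L));
        System.out.println(affinityKeyFor(100_000L));
    }
}
```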
