We actually use different mappers assigned to different caches using the
same source key, e.g.:
public class Key
{
    public long p;  // varying per-record component (type illustrative)
    public int x;   // spatial component (type illustrative)
    public int y;   // spatial component (type illustrative)
}

public class AffinityFunctionOne : IAffinityFunction
{
    public int GetPartition(object key)
    {
        return PartitionHash(((Key)key).p);  // PartitionHash: our own hash -> partition helper
    }
}

// The second mapper was truncated in the original message; presumably it
// partitions on the remaining (spatial) fields, along these lines:
public class AffinityFunctionTwo : IAffinityFunction
{
    public int GetPartition(object key)
    {
        var k = (Key)key;
        return PartitionHash(k.x, k.y);
    }
}
For our case, we have a stateful instance associated with each user session
that actually handles user requests. That stateful instance is usually
long-lived. In this case, we want the data for that instance to be stored on
other nodes - at the dispatch stage, we choose a node to host the session
Could you give an example of such mapping?
If that’s possible, it might also be very helpful to see the implementation
of your mapper. Looking at the code is often the best way to understand a
use case :)
-Val
On Wed, Nov 4, 2020 at 12:29 PM Raymond Wilson wrote:
Actually, it's worse than that...
We have more than one key -> partition mapping for the same key (part of a
CQRS pattern we use).
Aren't key affinity functions essentially an API in any event?
Cheers,
Raymond.
On Wed, Nov 4, 2020 at 9:54 PM Valentin Kulichenko wrote:
I've created a ticket for this:
https://issues.apache.org/jira/browse/IGNITE-13671
-Val
On Wed, Nov 4, 2020 at 12:53 AM Valentin Kulichenko <valentin.kuliche...@gmail.com> wrote:
Thanks, Raymond. So the reason why you couldn't use the @AffinityKeyMapped
annotation or the CacheKeyConfiguration is that collocation is based on
*two fields*, not just one field. Is my understanding correct?
If that's the case, I believe it can be easily improved by providing ways
to specify
If I have a composite primary key, where some fields can have varying values
while the remaining fields identify a location, and I want all keys with the
same values in those identifying fields to reside in the same partition for
processing colocation requirements, then I can't use the standard Ignite
mapping to do this; I need to use a custom mapper that uses just those
identifying fields.
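The requirement above - partitioning on only the identifying subset of a composite key - can be sketched in plain Java, with no Ignite dependency. The field names p, x, y follow the Key example earlier in the thread; PARTITION_COUNT and the class name are illustrative assumptions, not Ignite API.

```java
import java.util.Objects;

public class SpatialPartitionSketch {
    public static final int PARTITION_COUNT = 1024; // assumed partition count

    // Composite primary key: p varies per record; x/y identify the spatial cell.
    public record Key(long p, int x, int y) {}

    // The partition is computed from the spatial fields only, so every key
    // with the same (x, y) maps to the same partition regardless of p.
    public static int partitionFor(Key key) {
        int h = Objects.hash(key.x(), key.y());
        return Math.floorMod(h, PARTITION_COUNT); // floorMod keeps it non-negative
    }

    public static void main(String[] args) {
        Key a = new Key(1L, 10, 20);
        Key b = new Key(2L, 10, 20); // same cell, different p
        System.out.println(partitionFor(a) == partitionFor(b)); // prints true
    }
}
```

This is only the key-to-partition half of the problem; as discussed below, partition-to-node placement stays with Ignite.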
Raymond,
Thanks for the details. So it sounds like you have a custom affinity
mapper, not affinity function. This makes things simpler, but I'm still
failing to understand why standard mechanisms for collocation [1] didn't
work for you. Could you please clarify?
[1]
In terms of the Key -> Partition -> Node mapping, we provide custom affinity
mappers for the Key -> Partition stage and allow Ignite to map partitions to
nodes.
Our keys are structs with multiple fields forming a composite primary key,
parts of which are spatially identifying and parts contain other
Moti, Raymond,
Could you please describe your use cases in more detail? What are the types
used as cache keys? What is the custom logic that you use for affinity
mapping? What was the exact reason to customize versus using built-in
collocation mechanisms?
Ultimately, I'm sure that custom
Thanks for the clarification.
There was no intention to remove the customizable key-to-partition mapping.
Difficulties arise when mapping partitions to nodes, so it's desirable to
have an internally tested implementation with a way to customize its behavior
without additional coding on the user side.
Just to be clear, the affinity functions we are using convert keys to
partitions, we do not map partitions to nodes and leave that to Ignite.
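The division of labor Raymond describes can be sketched in plain Java. The interface and names below are hypothetical, not Ignite types; the point is that the application supplies only the key-to-partition step, and the partition-to-node stage is deliberately absent because it is left to Ignite.

```java
public class MappingSplitSketch {
    public static final int PARTITIONS = 256; // assumed partition count

    // The only piece the application supplies (hypothetical interface).
    public interface KeyToPartitionMapper<K> {
        int partition(K key); // must return a value in [0, PARTITIONS)
    }

    public static void main(String[] args) {
        KeyToPartitionMapper<String> mapper =
            key -> Math.floorMod(key.hashCode(), PARTITIONS);

        int p = mapper.partition("session-42");
        // Partition -> node assignment is intentionally not modeled here:
        // in the use case discussed, that stage belongs entirely to Ignite.
        System.out.println(p >= 0 && p < PARTITIONS); // prints true
    }
}
```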
On Tue, Nov 3, 2020 at 8:48 AM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:
Hello.
Custom affinity functions can cause weird bugs and data loss if implemented
wrongly.
There is an intention to keep a backup filter based on user attributes
(with additional validation logic to ensure correctness) for controllable
data placement.
Can you describe more precisely why you
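The attribute-based backup filter Alexei mentions can be illustrated with a plain-Java sketch (no Ignite types; the Node record and the "CELL" attribute name are assumptions for illustration). The idea: a backup candidate is accepted only if its attribute value differs from every node already assigned, so primary and backup copies never share, e.g., the same availability cell.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

public class BackupFilterSketch {
    // Minimal stand-in for a cluster node with user attributes.
    public record Node(String id, Map<String, String> attributes) {}

    // Accept a backup candidate only if no already-assigned node has the
    // same "CELL" attribute value.
    public static final BiPredicate<Node, List<Node>> CELL_BACKUP_FILTER =
        (candidate, assigned) -> assigned.stream()
            .noneMatch(n -> n.attributes().get("CELL")
                .equals(candidate.attributes().get("CELL")));

    public static void main(String[] args) {
        Node a = new Node("a", Map.of("CELL", "1"));
        Node b = new Node("b", Map.of("CELL", "1"));
        Node c = new Node("c", Map.of("CELL", "2"));

        System.out.println(CELL_BACKUP_FILTER.test(b, List.of(a))); // prints false (same cell)
        System.out.println(CELL_BACKUP_FILTER.test(c, List.of(a))); // prints true (different cell)
    }
}
```

In real Ignite, a filter of this shape is plugged into the built-in rendezvous affinity function rather than replacing the whole key-to-partition-to-node pipeline, which is what makes it easier to validate.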
We also use custom affinity functions (via the C# client).
The wish list mentions use of a particular annotation
(@CentralizedAffinityFunction).
Is the wish to remove just this annotation, or the ability to define custom
affinity functions at all?
In our case we use affinity functions to ensure