[
https://issues.apache.org/jira/browse/IGNITE-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741923#comment-15741923
]
Sergey Chugunov edited comment on IGNITE-4157 at 12/12/16 2:42 PM:
-------------------------------------------------------------------
h3. Design notes for marshaller mapping refactoring (final version)
# Each node holds a local cache of typeId->mappedName mappings. A *MappedName*
contains the typeName and the status of the mapping (*proposed* or *accepted*).
# A node is allowed to use a *proposed* mappedName only to unmarshal incoming
requests. In order to use it for marshalling, the node must wait until the
mappedName is transferred to *accepted* status.
# Requesting a new mapping is a two-phase process and requires two messages to
pass across the cluster: *MappingProposedMessage* and *MappingAcceptedMessage*,
the latter serving as the successful acknowledgement of the former.
# Requesting a new mapping is a synchronous process: when a node requests a new
mapping, the requesting thread is blocked until the mapping is either accepted
or rejected (see the sketch after this list).
# If two nodes simultaneously propose mappings for classes with conflicting
names, only the mapping whose *MappingProposedMessage* reaches the coordinator
node first is accepted. The other proposed mapping is rejected, and a
*MappingRejectedMessage* is sent to the requesting node as a failure
acknowledgement.
# If two nodes simultaneously propose a mapping for exactly the same class, the
coordinator node treats this situation as a duplicate request and sends no
acknowledgement message. The node that sent the duplicate request is unblocked
from waiting when the acceptance of the first request passes across the ring.
# The functionality of persisting accepted mappings in the filesystem is
preserved; the file name format changed slightly because support for mappings
on different platforms was added. The pattern now looks like this:
*<typeId>.classname<platformId>*, e.g. *1033.classname0*.
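The sketch below illustrates the life cycle from the notes above: a *proposed*
mapping becomes usable for marshalling only after acceptance, and the
requesting thread blocks until then. All class and method names here are
illustrative assumptions, not the actual Ignite internals.
{code:java}
// Minimal sketch of the mapping life cycle described above, assuming made-up
// names (MappedName, MappingStatus, requestMapping): NOT the actual Ignite
// internals. Rejection handling and message sending are omitted.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class MappingCacheSketch {
    enum MappingStatus { PROPOSED, ACCEPTED }

    /** Mapped name plus its acceptance status (item 1 of the notes). */
    static final class MappedName {
        final String typeName;
        volatile MappingStatus status;

        MappedName(String typeName, MappingStatus status) {
            this.typeName = typeName;
            this.status = status;
        }
    }

    /** Local cache of typeId -> MappedName mappings. */
    private final Map<Integer, MappedName> cache = new ConcurrentHashMap<>();

    /** Latches that block requesting threads until accept/reject arrives. */
    private final Map<Integer, CountDownLatch> pending = new ConcurrentHashMap<>();

    /**
     * Synchronous mapping request (items 3-4): store the mapping as PROPOSED,
     * send MappingProposedMessage (omitted) and block until the corresponding
     * MappingAcceptedMessage unblocks the thread.
     */
    public void requestMapping(int typeId, String typeName) throws InterruptedException {
        cache.putIfAbsent(typeId, new MappedName(typeName, MappingStatus.PROPOSED));

        CountDownLatch latch = pending.computeIfAbsent(typeId, id -> new CountDownLatch(1));

        // ...send MappingProposedMessage across the ring here...

        latch.await(); // released by onMappingAccepted()
    }

    /** Invoked when MappingAcceptedMessage passes across the ring (item 6). */
    public void onMappingAccepted(int typeId) {
        MappedName name = cache.get(typeId);

        if (name != null)
            name.status = MappingStatus.ACCEPTED; // now usable for marshalling

        CountDownLatch latch = pending.remove(typeId);

        if (latch != null)
            latch.countDown();
    }
}
{code}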
h3. Other improvements
As part of this task the process of collecting discovery data on node join was
refactored with the following improvements.
# The API of the *DiscoverySpi* and *DiscoverySpiDataExchange* interfaces was
changed to use a *DiscoveryDataContainer* entity to hold and manage discovery
data instead of the previously used maps.
# *DiscoveryDataContainer* enables components to collect node-specific and
grid-common discovery data separately.
In the case of the *MarshallerMappingProcessor* component, the discovery data
is exactly the same on all nodes of the grid, so it can be collected only once,
on the coordinator node, reducing network traffic and memory consumption caused
by duplicated data.
# The API of *GridComponent* was improved to distinguish two events:
*onJoiningNodeDataReceived*, which is called on all nodes already in the grid,
and *onGridDataReceived*, which is called on the newly joining node upon
receiving discovery data collected by the grid (see the sketch below).
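The sketch below illustrates the node-specific vs. grid-common split and the
two *GridComponent* callbacks from this list; the names mirror the entities
mentioned above but are assumptions, not the real Ignite API.
{code:java}
// Illustrative-only sketch of the collect/receive flow from items 1-3 above.
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

class DiscoveryDataContainerSketch {
    /** Data specific to the joining node, keyed by component id. */
    private final Map<Integer, Serializable> joiningNodeData = new HashMap<>();

    /** Data identical on every node; collected once, on the coordinator only. */
    private final Map<Integer, Serializable> commonData = new HashMap<>();

    void addJoiningNodeData(int cmpId, Serializable data) {
        joiningNodeData.put(cmpId, data);
    }

    /** E.g. MarshallerMappingProcessor adds its mappings here just once. */
    void addGridCommonData(int cmpId, Serializable data) {
        commonData.put(cmpId, data);
    }
}

interface GridComponentSketch {
    /** Called on nodes already in the grid when a joining node's data arrives. */
    void onJoiningNodeDataReceived(DiscoveryDataContainerSketch data);

    /** Called on the newly joining node when data collected by the grid arrives. */
    void onGridDataReceived(DiscoveryDataContainerSketch data);
}
{code}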
> Use discovery custom messages instead of marshaller cache
> ---------------------------------------------------------
>
> Key: IGNITE-4157
> URL: https://issues.apache.org/jira/browse/IGNITE-4157
> Project: Ignite
> Issue Type: Improvement
> Components: cache
> Reporter: Alexey Goncharuk
> Assignee: Sergey Chugunov
> Fix For: 2.0
>
>
> Currently we use system caches for keeping the classname-to-class-ID mapping
> and for storing binary metadata.
> This has several serious disadvantages:
> 1) We need to introduce at least two additional thread pools, one for each of
> these caches.
> 2) Since cache operations require stable topology, registering a class ID or
> updating metadata inside a transaction or another cache operation is tricky
> and deadlock-prone.
> 3) It may be beneficial in some cases to have nodes with no caches at all;
> currently this is impossible because system caches are always present.
> 4) Reading binary metadata leads to huge local contention, and caching
> metadata values in a local map doubles memory consumption.
> I suggest we use discovery custom events for these purposes. Each node will
> have a corresponding local map (state) which will be updated inside a custom
> event handler. At first glance, this should remove all the disadvantages
> above.
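For illustration, a hypothetical minimal sketch of the local-map-plus-custom-
events approach suggested in the description above; all names are made up and
nothing here reflects the actual implementation.
{code:java}
// Hypothetical sketch of the suggestion above: a plain local map updated from
// a discovery custom event handler, with no system cache and no extra thread
// pools. All names are made up for illustration.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LocalMappingState {
    /** Local typeId -> className state. */
    private final Map<Integer, String> mappings = new ConcurrentHashMap<>();

    /** Invoked from the discovery custom event handler on every node. */
    void onCustomDiscoveryEvent(int typeId, String clsName) {
        mappings.putIfAbsent(typeId, clsName);
    }

    String className(int typeId) {
        return mappings.get(typeId);
    }
}
{code}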