[
https://issues.apache.org/jira/browse/BOOKKEEPER-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485307#comment-13485307
]
Sijie Guo commented on BOOKKEEPER-362:
--------------------------------------
{quote}
On the region address versus region name: if cfg.getRegions() returned a
data structure of (VIP/DNS-entry, region-name/alias) pairs, we could use the
name/alias. The current configuration does not have that information.
It is unlikely that the VIP/DNS entry will change, similar to changing the region name.
We can open a separate tracking item if you feel strongly about it.
{quote}
I just raised a concern; if you are OK with the assumption that the VIP name will not
change, I am OK with it too.
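To make the suggestion concrete, here is a rough sketch (hypothetical names only, not the existing ServerConfiguration API) of what a regions configuration carrying both the VIP/DNS entry and a stable alias could look like; metadata could then be keyed on the alias rather than the address:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch only -- not an existing Hedwig class.
public class RegionsConfigSketch {
    // stable region alias -> VIP/DNS entry, e.g. "region-west" -> "hub-west.example.com:4080"
    private final Map<String, String> regions = new LinkedHashMap<String, String>();

    public void addRegion(String alias, String vipOrDnsEntry) {
        regions.put(alias, vipOrDnsEntry);
    }

    // Metadata (e.g. the <topic, region> entries) could be keyed on the alias,
    // so a VIP/DNS change would not invalidate existing metadata.
    public Map<String, String> getRegions() {
        return regions;
    }
}
{code}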
{quote}
The ZKVersion used in the <topic, region> map guards against the case where
different hub servers use the map at the same time (or interleaved).
The update to the ZK node updates/tracks the metadata version, to guard
against different hubs trying to manipulate it concurrently.
{quote}
Yes. This is the same issue for every metadata storage. That is why I asked for a
common interface to read/write/remove metadata with a Version, and to make
RemoteSubscriptionManager responsible only for reading/writing/removing metadata.
The relationship between RegionManager and RemoteSubscriptionManager would then be
similar to that between BookKeeperPersistenceManager and TopicPersistenceInfoManager:
BookKeeperPersistenceManager reads metadata from TopicPersistenceInfoManager
together with its Version, and writes metadata back to TopicPersistenceInfoManager
with the Version from the last read. So my point is that RegionManager should hold
the metadata Version for RemoteSubscriptionManager; otherwise each
RemoteSubscriptionManager implementation needs its own data structure to maintain
the version internally, and the generic Version interface becomes meaningless.
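As a rough illustration of what I mean (hypothetical names throughout, not an existing Hedwig interface), the manager that owns the metadata would hand back the Version it read and require it again on write/remove, the same way BookKeeperPersistenceManager works against TopicPersistenceInfoManager:
{code:java}
// Minimal sketch, hypothetical names -- it only illustrates carrying the
// Version from read back into write/remove so the underlying store
// (ZooKeeper or another backend) can reject stale updates.
public interface VersionedMetadataManager<K, V> {

    // Pairs a value with the version the store returned when it was read.
    final class Versioned<T> {
        public final T value;
        public final long version;   // stand-in for a real Version token

        public Versioned(T value, long version) {
            this.value = value;
            this.version = version;
        }
    }

    Versioned<V> read(K key) throws Exception;

    // expectedVersion is the version from the last read; the store rejects
    // the write if another hub has modified the metadata since then.
    long write(K key, V value, long expectedVersion) throws Exception;

    void remove(K key, long expectedVersion) throws Exception;
}
{code}
With something like this, RegionManager keeps the Versioned handle it read and passes the version back on update/remove, so no RemoteSubscriptionManager implementation needs its own internal version bookkeeping.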
{quote}
I do not expect # of regions will be in the magnitude that serialized
region-list would make much difference w.r.t. performance.
{quote}
The reason I proposed batching the region list into a single metadata entry is the
following. Suppose we have M million topics and K regions:
1) Unbatched, we might have M * K metadata entries; batched, we have only M entries.
That is a real difference for the metadata storage.
2) When the metadata is removed on the last local unsubscribe, K remove operations
are issued independently if not batched; batched, we need just one remove, which
guarantees that all the remote subscriptions are removed (see the sketch below).
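A back-of-the-envelope sketch with made-up numbers (M = 1,000,000 topics, K = 5 regions, purely for illustration) shows the difference:
{code:java}
// Illustrative arithmetic only -- M and K are made-up numbers.
public class BatchingCostSketch {
    public static void main(String[] args) {
        long topics = 1000000L;  // M
        long regions = 5L;       // K

        // Metadata entries kept in the store.
        long unbatchedEntries = topics * regions;  // M * K = 5,000,000
        long batchedEntries = topics;              // M     = 1,000,000

        // Remove operations issued on the last local unsubscribe of one topic.
        long unbatchedRemoves = regions;  // K independent removes, each can fail separately
        long batchedRemoves = 1L;         // one remove covers all remote subscriptions

        System.out.println("entries:  unbatched=" + unbatchedEntries
                + " batched=" + batchedEntries);
        System.out.println("removes:  unbatched=" + unbatchedRemoves
                + " batched=" + batchedRemoves);
    }
}
{code}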
I have just written down my concerns about this metadata interface, trying to make it
convenient to implement on different metadata storages and to keep it from becoming an
issue when the scale of metadata is huge. I would like to hear others' opinions.
[~fpj] [~ikelly] It would be good to have your opinions on the metadata
interface.
> Local subscriptions fail if remote region is down
> -------------------------------------------------
>
> Key: BOOKKEEPER-362
> URL: https://issues.apache.org/jira/browse/BOOKKEEPER-362
> Project: Bookkeeper
> Issue Type: Bug
> Components: hedwig-server
> Affects Versions: 4.2.0
> Reporter: Aniruddha
> Assignee: Yixue (Andrew) Zhu
> Priority: Critical
> Labels: hedwig
> Fix For: 4.2.0
>
> Attachments: rebase_remoteregion.patch, rebase_remoteregion.patch
>
>
> Currently, local subscriptions fail if the remote region hubs are down, even
> if the local hub has subscribed to the remote topic previously. Because of
> this, one region cannot function independently of the other.
> A more detailed discussion related to this can be found here:
> http://mail-archives.apache.org/mod_mbox/zookeeper-bookkeeper-dev/201208.mbox/%3cCAOLhyDQSOF+Y+pvnyrd-HJRq1YEr=c8ok_b3_mr81r1g-9m...@mail.gmail.com%3e