Hi,

I'm not arguing which stack is better. I just posted what I found to better understand the status of these implementations so that we can improve them to better support the nature of the SCA domain registry. In fact, I have enhanced the Hazelcast-based registry with entry listeners and a test case.

See more comments inline.

Thanks,
Raymond
--------------------------------------------------
From: "ant elder" <[email protected]>
Sent: Tuesday, January 19, 2010 12:50 AM
To: <[email protected]>
Subject: Re: Issues related to the Hazelcast based endpoint registry

None of the endpoint registry implementations that we have so far work
completely for everything we need. The Hazelcast one looks to me like
it's the best one so far in terms of functionality and ease of use, so
I'm focusing on using and improving it. Are you just trying to get an
understanding of where things are at with it or are you suggesting
that one of the other impls may have more promise?

Some comments inline

On Tue, Jan 19, 2010 at 5:47 AM, Raymond Feng <[email protected]> wrote:
Hi,

I was trying endpoint-hazelcast. The distributed map from Hazelcast doesn't seem to work out of the box for our SCA domain registry which requires the
following:

1) Each member can contribute entries to the map and they can be seen by all
members in the group.
2) The entries added by a member (the Hazelcast instance where getMap() is
called) are owned by that member.
3) When the member leaves the group, all entries owned by the member should
be removed.

Hazelcast can support 1), but we need to do some work to get 2) and 3) working. As far as I can see from the test case [1], Hazelcast IMap.localKeySet() does not
return only the keys that were added locally by the owning member.


That's not actually what the localKeySet method is for, as it also
returns keys that are being backed up by the node (which happens by
default, though we don't need this backup function so could switch it
off). Is there a problem with using the isLocal method on the
HazelcastEndpointRegistry?

In the test case, if I set the backup count to 0, then after reg1 is stopped, all the entries, including the one added by reg2, are gone. Hazelcast picks a member to keep the data, and it happens that in this case reg1 was picked to keep the two entries. We'll have to use backup-count >= 1.
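For reference, the backup count can be set when the instance is created. A minimal sketch, assuming the Config/MapConfig API (the "endpoints" map name is a placeholder, and method names may differ across Hazelcast versions):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class RegistryConfig {
    public static HazelcastInstance newInstance() {
        Config config = new Config();
        MapConfig mapConfig = new MapConfig();
        mapConfig.setName("endpoints");  // placeholder map name
        // keep one backup copy of each entry; with 0 backups, entries
        // whose owning partition lives on a departing node are lost
        mapConfig.setBackupCount(1);
        config.addMapConfig(mapConfig);
        return Hazelcast.newHazelcastInstance(config);
    }
}
```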


I could not find a built-in eviction policy from Hazelcast that suits this need. And it seems that Hazelcast doesn't give us the owner (the member that
put the
entry) of a key.


No it doesn't out-of-the-box (the Tribes one doesn't seem to work
properly either). I've looked at _lots_ of these clustering toolkits
and none of them look like they do quite what we'd ideally have for
this, but Hazelcast does have features we can use to implement this
ourselves. I've not started implementing it yet as it doesn't seem to
really matter if there are old endpoints left in the registry for any of
the use cases I've had. Does this cause a problem for what you need?
If so let's just fix it; the simplest way would be to have a member
listener watching for members leaving and have that remove all the
endpoints from that member.

I have implemented the ephemeral behavior for the Tribes-based registry.

I also added a MembershipListener to the Hazelcast one to receive the memberRemoved event. But Hazelcast doesn't give me the ability to find out whether an entry was added by a given member; it only tells us which member holds the data locally.
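One way around that limitation is to keep the bookkeeping ourselves: tag each entry value with the id of the member that contributed it, so the memberRemoved callback can sweep that member's entries. A minimal, self-contained illustration of the pattern (plain collections rather than the Hazelcast API; OwnerTrackedRegistry and the member ids are all made-up names):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OwnerTrackedRegistry {

    // value wrapper recording which member contributed the endpoint
    static class Entry {
        final String memberId;
        final String endpointUri;
        Entry(String memberId, String endpointUri) {
            this.memberId = memberId;
            this.endpointUri = endpointUri;
        }
    }

    private final Map<String, Entry> endpoints =
            new ConcurrentHashMap<String, Entry>();

    // called by a member when it registers an endpoint
    public void addEndpoint(String memberId, String uri) {
        endpoints.put(uri, new Entry(memberId, uri));
    }

    // what a MembershipListener.memberRemoved callback would do:
    // drop every entry owned by the departed member
    public void memberRemoved(String memberId) {
        for (Map.Entry<String, Entry> e : endpoints.entrySet()) {
            if (memberId.equals(e.getValue().memberId)) {
                endpoints.remove(e.getKey());
            }
        }
    }

    public int size() {
        return endpoints.size();
    }

    public static void main(String[] args) {
        OwnerTrackedRegistry registry = new OwnerTrackedRegistry();
        registry.addEndpoint("reg1", "http://node1:8080/AccountService");
        registry.addEndpoint("reg1", "http://node1:8080/StockService");
        registry.addEndpoint("reg2", "http://node2:8080/LoanService");
        registry.memberRemoved("reg1"); // reg1 leaves: its two endpoints go
        System.out.println(registry.size()); // prints 1
    }
}
```

In the real registry the wrapper would live inside the distributed map's values, so every member sees the ownership information.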

The removal and notification of dead entries is important for OSGi remote services with SCA: we need to remove the imported service proxy when an endpoint is removed.
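The entry listener I added drives exactly that kind of cleanup. A sketch against the Hazelcast EntryListener callback interface, registered via IMap.addEntryListener(listener, true) (EndpointMapListener is a made-up name, and the generics may not apply in older Hazelcast versions):

```java
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;

// Sketch: react to endpoint removals so the imported OSGi service
// proxy can be dropped when an endpoint disappears.
public class EndpointMapListener implements EntryListener<String, String> {

    public void entryAdded(EntryEvent<String, String> event) {
        // a new endpoint appeared; an importer could create a proxy here
    }

    public void entryRemoved(EntryEvent<String, String> event) {
        // endpoint gone (member left or service unregistered):
        // remove the imported service proxy keyed by event.getKey()
    }

    public void entryUpdated(EntryEvent<String, String> event) {
        // no-op for this sketch
    }

    public void entryEvicted(EntryEvent<String, String> event) {
        // treat eviction like removal if eviction is ever enabled
    }
}
```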


...ant
