Cool. When are you guys planning to release the generalized component?
On Fri, Dec 2, 2016 at 10:57 AM, Anjana Fernando wrote:
Currently we are in the review process of the code and making slight
adjustments to the algorithm too. Probably things will be finalized early
next week and then we can work on putting it in a common repo.
On Fri, Dec 2, 2016 at 11:15 AM, Asanka Abeyweera wrote:
Hi guys,
So the generalized coordination component was done by SameeraR, and the
discussions for that can be seen at [1] and [2]. We've identified some
improvements that can also be made to it, to make it more stable.
[1] "Updated Invitation: Code Review for the implementation of RDBMS based
coor
Hi all,
Following is the PR[1] related to the implementation. This is currently merged
to the master branch and will be a feature of the next MB release (3.2.0).
[1] https://github.com/wso2/andes/pull/668
On Mon, Nov 7, 2016 at 12:45 PM, Ramith Jayasinghe wrote:
+1
On Mon, Nov 7, 2016 at 12:40 PM, Anjana Fernando wrote:
Hi Ramith,
Sure. Actually, I was talking with SameeraR to take over this and create a
common component which has the required coordination functionality. The
idea is to create a component, where the providers can be plugged in, such
as the RDBMS based one, ZK, or any other container specific provi
This might require some work.. shall we have a chat?
On Thu, Nov 3, 2016 at 3:52 PM, Anjana Fernando wrote:
Ping! ..
On Wed, Nov 2, 2016 at 5:03 PM, Anjana Fernando wrote:
Hi,
On Wed, Nov 2, 2016 at 3:14 PM, Asanka Abeyweera wrote:
Okay, can we please get it as a common component?
Cheers,
Anjana.
Hi Anjana,
Currently, the implementation is part of the MB code (not a common
component).
On Wed, Nov 2, 2016 at 3:00 PM, Anjana Fernando wrote:
Hi Asanka/Ramith,
So for C5 based Streaming Analytics solution, we need coordination
functionality there as well. Is the functionality mentioned here created
as a common component or baked into the MB code? .. if so, can we please
get it implemented as a generic component, so other products ca
Great! ..
Cheers,
Anjana.
On Tue, Aug 9, 2016 at 1:49 PM, Asanka Abeyweera wrote:
Hi Anjana,
Thank you for the suggestion. We have already done a similar thing. We
have added a backoff time after creating the leader entry, and we check
whether the leader entry is the one created by this node before informing
of the leader change.
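The create-then-verify pattern described above can be sketched roughly as follows, using SQLite as a stand-in for the real RDBMS. The `leader` table, column names, and backoff value are illustrative assumptions, not MB's actual schema:

```python
import sqlite3
import time

def try_become_leader(conn, node_id, backoff_seconds=0.1):
    """Attempt to claim leadership by inserting the single leader row,
    then back off and re-read to confirm the row is still ours before
    announcing a leader change. Illustrative only, not the MB schema."""
    try:
        # Only one row with id=1 can exist, so only one insert can win.
        conn.execute(
            "INSERT INTO leader (id, node_id, heartbeat) VALUES (1, ?, ?)",
            (node_id, time.time()),
        )
        conn.commit()
    except sqlite3.IntegrityError:
        return False  # someone else already holds the leader entry

    # Backoff window, as described in the thread: let a racing writer's
    # state settle before declaring the election result.
    time.sleep(backoff_seconds)

    row = conn.execute("SELECT node_id FROM leader WHERE id = 1").fetchone()
    return row is not None and row[0] == node_id

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE leader (id INTEGER PRIMARY KEY, node_id TEXT, heartbeat REAL)"
)
print(try_become_leader(conn, "node-A"))  # True: first claimant wins
print(try_become_leader(conn, "node-B"))  # False: entry already exists
```

The re-read after the backoff is what avoids the race Anjana raises later in the thread, where two nodes compete and both believe they won.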
On Tue, Aug 9, 2016 at 12:27 PM, Anjana Fernando wrote:
I see, thanks for the clarification, looks good! .. I think a small thing
to consider is avoiding the situation where the current leader goes away,
two others compete to become the leader, and both the first and the second
check (read) the table for the last heartbeat and figure
out
Hi Anjana,
After having an offline chat with Asanka, what I understood was that the
leader election is done completely via the database, with no network
communication. The leader is recorded in the database first. Then the
leader updates the node data periodically in the database. If some node
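The database-only heartbeat scheme described above can be sketched minimally as follows, with SQLite in place of the real RDBMS and an illustrative `leader` table and staleness threshold (neither is the actual MB implementation):

```python
import sqlite3
import time

# Seconds without a heartbeat before the leader is presumed dead
# (illustrative value, not MB's actual timeout).
STALE_AFTER = 5.0

def update_heartbeat(conn, node_id):
    """The current leader periodically refreshes its row in the database."""
    conn.execute(
        "UPDATE leader SET heartbeat = ? WHERE id = 1 AND node_id = ?",
        (time.time(), node_id),
    )
    conn.commit()

def leader_is_alive(conn):
    """Any node decides leader liveness purely from the database row,
    with no network communication between the nodes."""
    row = conn.execute("SELECT heartbeat FROM leader WHERE id = 1").fetchone()
    return row is not None and (time.time() - row[0]) < STALE_AFTER

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE leader (id INTEGER PRIMARY KEY, node_id TEXT, heartbeat REAL)"
)
# Seed a leader row whose heartbeat is a minute old.
conn.execute("INSERT INTO leader VALUES (1, 'node-A', ?)", (time.time() - 60,))
print(leader_is_alive(conn))   # False: stale heartbeat, presumed dead
update_heartbeat(conn, "node-A")
print(leader_is_alive(conn))   # True: fresh heartbeat
```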
Hi Anjana,
On Tue, Aug 9, 2016 at 10:13 AM, Anjana Fernando wrote:
Hi,
I just noticed this thread. I have some concerns about this
implementation. First of all, I don't think the statement mentioned here,
saying an external service such as ZooKeeper doesn't work, is correct.
Because, if you have a ZK cluster (it is supposed to be used as a
cluster), you will not have a
Hi Ramith/Asanka,
ESB/DSS natask impl is also based on HZ. I guess if this model works for
the MB, we should make it generic for all such coordination requirements.
(Thinking about using this in ESB 5.1)?
On Fri, Aug 5, 2016 at 3:58 AM, Sajini De Silva wrote:
Hi Maninda,
Locking the database is supported by some databases, but there would be a
huge performance impact, so we cannot use that approach. If this approach
cannot be adopted, the only thing we can do is queue-wise load balancing
through the slot coordinator. But in that case we cannot guarant
Hi Sajini,
Yes, that is what I meant. As the number of slots is proportional to the
number of messages passing through the cluster, slot delivery should not
be handled by the coordinator; having only one coordinator in the cluster
is a bottleneck for scaling the messages passing through th
Hi Maninda,
On Fri, Aug 5, 2016 at 2:28 PM, Maninda Edirisooriya
wrote:
@Sajini,
But the number of slots is proportional to the number of messages passing
through the MB, which need to be handled by the coordinator. That is what
I meant by "information related to meta data of messages pass through a
single coordinator". Ideally, after the senders and receivers are subscr
@Imesh,
We can prove that doing leader election using a lib (where we maintain
cluster state in another place, a.k.a. the DB) will not solve our original
problem (this also relates to our past experience with both ZooKeeper
and Hazelcast).
We can make this implementation a common component if othe
Hi Maninda,
We are not using one coordinator to send and receive messages. All the
nodes in the cluster can receive and send messages to MB and messages will
be written to database by multiple nodes. Also messages will be read from
the database by multiple nodes. In MB we have a concept called sl
On Fri, Aug 5, 2016 at 12:00 PM, Hasitha Hiranya wrote:
The same issue with Hazelcast can be experienced with ESB inbounds
(running on top of NTASK) and VFS distribution locks.
The idea that only a single worker runs at a given time breaks if a
Hazelcast heartbeat fails. This will make two workers work in parallel.
Also with distributed lock
Hi,
On Fri, Aug 5, 2016 at 11:31 AM, Akila Ravihansa Perera
wrote:
Hi,
I think the original problem here is that MB needs to absolutely guarantee
the integrity of the data written to the database. And if I understood
correctly, only the coordinator can write specific entries to the database
which is a unique scenario for MB. Any network based leader election
algo
Hi Asitha/Asanka,
I think it is clear that the issue we have here is mostly related to
Hazelcast.
Now to solve that problem I think it would be better to go ahead with a
generic leader election system for the entire platform rather than writing
one specific to MB. This requirement is there in sev
On Fri, Aug 5, 2016 at 7:31 AM, Imesh Gunaratne wrote:
>
>
> You can see here [3] how K8S has implemented leader election feature for
> the products deployed on top of that to utilize.
>
Correction: Please refer [4].
Leader election is currently based on Hazelcast, and things get
complicated when a network partition happens. If a node loses access to
the database and to the others in the cluster, that's comparatively safe
(when nodes are not incurring moderate load).
Now the problem really is in situations where ne
Hi Imesh,
We are not implementing this to overcome a limitation in the coordination
algorithm available in Hazelcast. We are implementing this since we need
an RDBMS based coordination algorithm (not a network based algorithm).
The reason is, a network based election algorithm will always elec
Hi Asanka,
Do we really need to implement a leader election algorithm on our own?
AFAIU this is a complex problem which has already been solved by several
algorithms [1]. IMO it would be better to go ahead with an existing, well
established implementation on etcd [1] or Consul [2].
Those provide H
Hi Maninda,
Since we are using the RDBMS to poll the node status, the cluster will
not end up in situation 1, 2, or 3. With this approach we consider a node
unreachable when it cannot access the database. Therefore an unreachable
node can never be the leader.
As you have mentioned, we are currently usi
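That rule, a node that cannot reach the database immediately stops considering itself eligible to lead, can be sketched as follows (SQLite stands in for the RDBMS; the schema, names, and error handling are illustrative assumptions, not the real component):

```python
import sqlite3

def still_leader(conn, node_id):
    """A node treats itself as unreachable, and so ineligible to lead,
    the moment it cannot reach the database: any DB error means step
    down. Illustrative sketch; real error handling is more involved."""
    try:
        row = conn.execute(
            "SELECT node_id FROM leader WHERE id = 1"
        ).fetchone()
    except sqlite3.Error:
        return False  # cannot reach the coordination store: not the leader
    return row is not None and row[0] == node_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leader (id INTEGER PRIMARY KEY, node_id TEXT)")
conn.execute("INSERT INTO leader VALUES (1, 'node-A')")
print(still_leader(conn, "node-A"))  # True while the DB is reachable
conn.close()
print(still_leader(conn, "node-A"))  # False once the DB is unreachable
```

Tying leadership to database reachability is what sidesteps the Hazelcast-partition scenarios discussed earlier: a node partitioned from the DB can never keep acting as leader.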
Hi Asanka,
As I understand it, the accuracy of electing the leader correctly depends
on the election mechanism with the RDBMS, because there can be edge cases
like:
1. The unreachable leader activates during the election process: then who
becomes the leader?
2. The elected leader becomes unreachable befo
Hi Akila,
Let me explain the issue in a different way. Let's assume the MB nodes are
using two different network interfaces for Hazelcast communication and
database communication. With such a configuration, there can be failures
only in the network interface used for Hazelcast communication in som
Hi,
What's the advantage of using an RDBMS (even as an alternative) to
implement leader/coordinator election? If the network connection to the
DB fails, then this will be a single point of failure. I don't think we
can scale RDBMS instances and expect the election algorithm to work. That
would be reduci
+1 to make it a common component. We have the clustering implementation
for the BPEL component based on Hazelcast. If the coordination is
available at the RDBMS level, we can remove the Hazelcast dependency.
Regards
Nandika
On Thu, Jul 28, 2016 at 1:28 PM, Hasitha Aravinda wrote:
Can we make it a common component that is not hard-coupled with MB? BPS
has the same requirement.
Thanks,
Hasitha.
On Thu, Jul 28, 2016 at 9:47 AM, Asanka Abeyweera wrote:
> Hi All,
>
> In MB, we have used a coordinator-based approach to manage the
> distributed messaging algorithm in the cluster