On Tue, Apr 22, 2014 at 2:30 PM, Galder Zamarreño <[email protected]>
wrote:
On 17 Apr 2014, at 08:03, Radim Vansa <[email protected]> wrote:
On 04/16/2014 05:38 PM, William Burns wrote:
On Wed, Apr 16, 2014 at 11:14 AM, Galder Zamarreño
<[email protected]> wrote:
On 11 Apr 2014, at 15:25, Radim Vansa <[email protected]> wrote:
OK, now I get the picture. Every time we register with a node (whether the first time or after a previous node crash), we receive all (filtered) keys from the whole cache, along with versions. Optionally values as well.
Exactly.
If multiple modifications happen in the time window before registering with the new node, we don't get notifications for them, just the whole cache again, and it's up to the application to decide whether there were no modifications or some modifications.
I’ve yet to decide on the exact type of event here, whether cache entry created, cache entry modified or a different one, but regardless, you’d get the key and the server-side version associated with that key. A user-provided client listener implementation could detect which keys’ versions have changed and react to that, i.e. lazily fetch new values. One such user-provided client listener implementation could be a listener that maintains a near cache, for example.
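To illustrate (purely a sketch; the client-side API isn't settled yet, so every type and method below is a hypothetical placeholder, not the actual Hot Rod client API), such a near-cache listener could boil down to something like:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical event shape: events carry only the key and the server-side version.
interface KeyVersionEvent<K> {
   K getKey();
   long getVersion();
}

// Sketch of a user-provided listener that maintains a near cache: any
// created/modified notification invalidates the local copy, and the value
// is lazily re-fetched from the remote cache on the next read.
class NearCacheListener<K, V> {
   private final Map<K, V> nearCache = new ConcurrentHashMap<>();
   private final Function<K, V> remoteGet; // e.g. a reference to the remote cache's get

   NearCacheListener(Function<K, V> remoteGet) {
      this.remoteGet = remoteGet;
   }

   // Would be wired to created/modified events arriving over Hot Rod.
   void onEntryChanged(KeyVersionEvent<K> event) {
      nearCache.remove(event.getKey());
   }

   V get(K key) {
      return nearCache.computeIfAbsent(key, remoteGet);
   }
}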
My current code was planning on raising a CacheEntryCreatedEvent in
this case. I didn't see any special reason to require a new event
type, unless anyone can think of a use case?
When the code cannot rely on the fact that created = (null -> some) and modified = (some -> some), it seems to me that the user will have to handle the events in the same way. I don't see a reason to differentiate between them in the protocol anyway.
One problem that has come to my mind: what about removed entries? If you push the keyset to the client without marking the start and end of these events (and expecting the client to fire removed events internally for all keys not mentioned), the client can miss some entry deletions forever. Are tombstones planned for any particular version of Infinispan?
That’s a good reason why a different event type might be useful. By receiving a special cache entry event while keys are being looped over, the client can detect that a keyset is being returned, for example because the server went down and the Hot Rod client transparently failed over to a different node and re-added the client listener. The user of the client, say a near cache, when it receives the first of these special events, can decide to, say, clear the near cache contents, since it might have missed some events.
The different event type gets around the need for a start/end event. The first time the special event is received, that’s your start, and when you receive something other than the special event, that’s the end, and normal operation is back in place.
WDYT?
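To make that concrete, a near cache consuming such a special event might behave roughly like this (again, all names below are hypothetical placeholders, not the actual protocol or API):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the first "special" event after failover means we may have missed
// events (including removals), so the near cache state is cleared; any
// ordinary event afterwards means the keyset replay is over.
class FailoverAwareNearCache<K> {
   private final Map<K, Long> knownVersions = new ConcurrentHashMap<>();
   private boolean replayingKeyset = false;

   // Fired once per key while the new server loops over its (filtered) contents.
   synchronized void onKeysetReplayEvent(K key, long version) {
      if (!replayingKeyset) {
         knownVersions.clear();   // we might have missed updates and removals
         replayingKeyset = true;  // this is the implicit "start" marker
      }
      knownVersions.put(key, version);
   }

   // Any ordinary created/modified event is the implicit "end" marker.
   synchronized void onNormalEvent(K key, long version) {
      replayingKeyset = false;
      knownVersions.put(key, version);
   }
}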
I'm not sure if you plan multi-threaded event delivery in the Java client, but having a special start event would make it clear that it must be delivered after all the events from the old server and before any events from the new server. It should also make special cases, like a server dying before it finishes sending the initial state, easier to handle.
Dan
Radim
As the version for entries is incremented per cache and not per value, there is no way to find out how many times an entry was modified (we can only know it was modified when we remember the previous version and the versions differ).
Exactly, the only assumption you can make is that the version is different, and that it’s a newer version than the older one.
Thanks for the clarifications, Galder - I was not completely sure about this from the design doc.
No probs
Btw., could you address Dan's question:
"Don't we want to allow the user to pass some data to the filter factory on registration? Otherwise we'd force the user to write a separate filter factory class every time they want to track changes to a single key."
I know this was already asked several times, but the discussion has always dissolved. I haven't seen the final "NO".
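For what it's worth, the thing Dan is asking for would look roughly like this on the server side (hypothetical interfaces, not the final SPI; the point is only that parameters sent with the registration reach the factory, so one factory class can track any single key):

// Hypothetical factory shapes, for illustration only.
interface KeyValueFilter<K, V> {
   boolean accept(K key, V value);
}

interface KeyValueFilterFactory {
   // Params are sent by the client on registration and unmarshalled server-side.
   <K, V> KeyValueFilter<K, V> getFilter(Object[] params);
}

// One factory class can serve "notify me about this one key" for any key.
class SingleKeyFilterFactory implements KeyValueFilterFactory {
   @Override
   public <K, V> KeyValueFilter<K, V> getFilter(Object[] params) {
      Object trackedKey = params[0];
      return (key, value) -> trackedKey.equals(key);
   }
}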
Radim
On 04/11/2014 02:36 PM, Galder Zamarreño wrote:
On 04 Apr 2014, at 19:11, William Burns <[email protected]>
wrote:
On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa
<[email protected]> wrote:
Hi,
I still don't think that the document properly covers the description of failover.
My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess). There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss that now.
There could be optimizations, but we have to worry about reordering if the primary owner doesn't do the forwarding. You could have the case of multiple writes to the same key from the clients, and let's say they send the message to the listener after they are written to the cache; there is no way to make sure they are delivered in the order they were written to the cache. We could do something with versions for this, though.
Versions do not provide global ordering. They are used, at each node, to identify an update, so they’re incremented at the node level, mixed with some other data that’s node-specific to make them unique cluster-wide. However, you can’t assume global ordering based on those with the current implementation. I agree there’s room for optimizations, but I think correctness and ordering are more important right now.
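As a rough illustration of why that is (not Infinispan's actual version format, just a sketch): a version effectively combines a node-scoped counter with node-specific identity, so two versions from different nodes cannot be compared meaningfully:

// Illustration only: a version that is unique cluster-wide but only ordered
// among versions produced by the same node.
final class IllustrativeVersion {
   final String nodeId;   // node-specific part, makes the version unique cluster-wide
   final long counter;    // incremented locally on each update at that node

   IllustrativeVersion(String nodeId, long counter) {
      this.nodeId = nodeId;
      this.counter = counter;
   }

   // Meaningful only when both versions were produced by the same node;
   // across nodes the counters say nothing about real-time ordering.
   boolean newerThanSameNode(IllustrativeVersion other) {
      if (!nodeId.equals(other.nodeId)) {
         throw new IllegalStateException("no global order across nodes");
      }
      return counter > other.counter;
   }
}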
Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
I am not that much into clustered listeners yet, but I think the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, the others will just forget about it.
That is how it is. I assume Galder was referring to node failures other than the one that registered the listener, which is obviously talked about in the next point.
That’s correct.
When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
Will that be all entries in the whole cache, or just from some node? I guess the former is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (ok, filtered), even if these entries have not changed, because the cluster can't know.
The former, being that the entire filtered/converted contents will be sent over.
Indeed the former, but not the entire entry: only keys and latest versions will be sent by default. Converters can be used to send the value side too.
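If a client does want values eagerly, a converter could project them into the event payload. Roughly (hypothetical shapes again, not the final SPI):

// Hypothetical converter SPI sketch: by default only key + version travel to
// the client; a converter lets the server ship the value in the event too.
interface EventConverter<K, V, C> {
   C convert(K key, V value, long version);
}

// Simple payload carrying key, version and value back to the client.
final class KeyVersionValue<K, V> {
   final K key;
   final long version;
   final V value;

   KeyVersionValue(K key, long version, V value) {
      this.key = key;
      this.version = version;
      this.value = value;
   }
}

class ValueAddingConverter<K, V> implements EventConverter<K, V, KeyVersionValue<K, V>> {
   @Override
   public KeyVersionValue<K, V> convert(K key, V value, long version) {
      return new KeyVersionValue<>(key, version, value);
   }
}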
This way the client avoids losing events. Once all entries have been iterated over, ongoing events will be sent to the client. This way of handling failure means that clients will receive at-least-once delivery of cache updates. They might receive multiple events for the same cache update as a result of topology change handling.
So, if there are several modifications before the client reconnects and the new target registers the listener, the client will only get a notification about the last modification, or rather just the entry content, right?
@Radim, you don’t get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or, using a custom converter, the value can be sent back, but this is optional.
This is all handled by the embedded cluster listeners, though. But the end goal is that you will only receive 1 event if the modification comes before the value was retrieved from the remote node, or 2 if it comes afterwards. Also, these modifications are queued by key, so if you had multiple modifications before it retrieved the value, it would only give you the last one.
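A sketch of that queuing behaviour (purely illustrative, not the cluster listener internals): pending notifications are coalesced per key, so only the most recent one is delivered once the value fetch completes:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of per-key coalescing: while the value is still being fetched,
// later modifications to the same key simply overwrite the pending entry,
// so only the last modification is delivered.
class CoalescingEventQueue<K> {
   private final Map<K, Long> pending = new ConcurrentHashMap<>();

   // Called for every raw modification event; keeps only the latest per key.
   void enqueue(K key, long version) {
      pending.put(key, version);
   }

   // Drains the single pending version for a key once the fetch completes.
   Long drain(K key) {
      return pending.remove(key);
   }
}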
Radim
On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
Hi all,
I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
The biggest changes are related to piggybacking on the cluster listeners functionality for registration/deregistration of listeners and handling failure scenarios. This should simplify the actual implementation on the Hot Rod side.
Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
A very important change is the fact that source id information has gone. This is primarily because near-cache-like implementations cannot make assumptions about what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility in the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
Cheers,
[1]
https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
--
Galder Zamarreño
[email protected]
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
--
Radim Vansa <[email protected]>
JBoss DataGrid QA
--
Galder Zamarreño
[email protected]
twitter.com/galderz
--
Radim Vansa <[email protected]>
JBoss DataGrid QA
--
Galder Zamarreño
[email protected]
twitter.com/galderz
--
Radim Vansa <[email protected]>
JBoss DataGrid QA
--
Galder Zamarreño
[email protected]
twitter.com/galderz
_______________________________________________
infinispan-dev mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/infinispan-dev