On Feb 22, 2012, at 5:26 PM, Galder Zamarreño wrote:

> 
> On Feb 22, 2012, at 1:00 PM, Manik Surtani wrote:
> 
>> 
>> On 22 Feb 2012, at 10:05, Galder Zamarreño wrote:
>> 
>>>> 
>>>>> For 2, imagine a client that starts a remote cache manager, signs up for 
>>>>> notifications on cache C, and has 50 threads interacting with cache C 
>>>>> concurrently (so, 50 channels are open with the server). I don't want the 
>>>>> server to send back 50 events for each cache operation of interest that 
>>>>> happens on the server side. 1 notification should be enough. This is one 
>>>>> of the reasons I want "option #1".
>>>> 
>>>> Yes, the server definitely needs to be smart enough to identify multiple 
>>>> connections from the same client, and this also needs to be distributed.  
>>> 
>>> +1, but the question is, how do you define "same client"? This is what I 
>>> was getting to with "origin" earlier (a way to identify cache managers). 
>>> You can't rely on the client IP to differentiate between clients, because 
>>> you could have multiple Hot Rod clients running independently on the same 
>>> machine.
>>> 
>>> If you have any other ideas, I'm happy to hear them.
>> 
>> Each client could be assigned a UUID when it first connects… and subsequent 
>> messages could include this UUID in a header.  Hmm, could get expensive 
>> though.
> 
> Hmmm, not sure that could work. It's the client that knows whether two 
> channels belong to the same client (e.g. two channels opened by the same 
> remote cache manager).
> 
> So, I'm inclined to think that such an ID should be generated by the client 
> itself. If you want to avoid sending it with each request, you'd have to 
> assume that there's a first request carrying that info, and that it won't 
> change for as long as the channel stays open. That's a fair assumption, but 
> it complicates clients because they need to differentiate between the first 
> and any subsequent requests.
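> 
> e.g. something along these lines on the server side (all names made up, 
> just a sketch):
> 
>    import java.util.Map;
>    import java.util.concurrent.ConcurrentHashMap;
> 
>    class ClientIdRegistry {
>       // channel -> client-generated ID, learned from the first request
>       private final Map<Object, String> channelToClientId =
>             new ConcurrentHashMap<Object, String>();
> 
>       // clientIdHeader is only guaranteed to be present on the first
>       // request of a channel; afterwards we reuse what we stored.
>       String clientIdFor(Object channel, String clientIdHeader) {
>          String id = channelToClientId.get(channel);
>          if (id == null) {
>             id = clientIdHeader;
>             channelToClientId.put(channel, id);
>          }
>          return id;
>       }
>    }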
> 
> I'm investigating other possibilities.

Btw, I'm reading 
http://sigops.org/sosp/sosp11/current/2011-Cascais/10-adya-online.pdf which 
describes Google's Thialfi client notification service; they use a similar 
mechanism.

"When present, the optional source parameter identifies the client that made 
the change. (This ID is provided by the application client at startup and is 
referred to as its application ID.) As an optimization, Thialfi omits delivery 
of the notification to this client, since the client already knows about the 
change."

So they go for providing this logical ID at startup, which then lets them 
avoid sending the notification back to the originating client.

I think we could pass this application or logical ID as part of the 
add/register listener call and avoid sending it with every operation. Tbh, 
only those clients that have registered a listener care about whether an 
event was generated locally, and only those care about not receiving the 
same notification on all their channels.

So, I think this could work.
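
A minimal sketch of what the server side could look like (all names here are 
made up, just to illustrate the idea):

   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   class NotificationRouter {
      // application ID -> the single channel chosen for notifications,
      // so a client with 50 open channels still gets one event, not 50.
      private final Map<String, Channel> listenersByAppId =
            new ConcurrentHashMap<String, Channel>();

      // Called when a client sends the add/register listener operation,
      // carrying the application ID it generated at startup.
      void registerListener(String appId, Channel channel) {
         listenersByAppId.put(appId, channel);
      }

      // Called for each cache event on the node where the operation is local.
      void onEvent(String originAppId, byte[] encodedEvent) {
         for (Map.Entry<String, Channel> e : listenersByAppId.entrySet()) {
            if (e.getKey().equals(originAppId))
               continue; // Thialfi-style: the originator already knows
            e.getValue().write(encodedEvent);
         }
      }

      interface Channel {
         void write(byte[] payload);
      }
   }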

> 
>> 
>>> 
>>>> E.g., if client C is connected to 2 server nodes S1 and S2, we don't want 
>>>> both S1 and S2 to send back the same notification,
>>> 
>>> +1 again, this can very easily be done using the isOriginLocal() call in 
>>> listeners. I was only planning to send notifications from the node where 
>>> the operation is local, which gets around this issue.
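>>> 
>>> Roughly (pushToClients is a made-up helper, the rest is the standard 
>>> listener API):
>>> 
>>>    import org.infinispan.notifications.Listener;
>>>    import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
>>>    import org.infinispan.notifications.cachelistener.event.CacheEntryModifiedEvent;
>>> 
>>>    @Listener
>>>    public class RemoteEventForwarder {
>>>       @CacheEntryModified
>>>       public void onModified(CacheEntryModifiedEvent<byte[], byte[]> event) {
>>>          // Only the node where the operation originated notifies clients,
>>>          // so two owners never both send the same notification.
>>>          if (!event.isPre() && event.isOriginLocal())
>>>             pushToClients(event.getKey()); // made-up helper
>>>       }
>>> 
>>>       private void pushToClients(byte[] key) {
>>>          // write the event down the subscribed clients' channels
>>>       }
>>>    }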
>> 
>> +1.
>> 
>>>> Also, what are your thoughts around batching notifications?  
>>> 
>>> This could be handy to avoid overloading clients as well, but wasn't in my 
>>> initial plans. 
>>> 
>>> What might be important at this stage, if we consider batching of 
>>> notifications worthwhile, is whether we'd want to embed it into the 
>>> protocol, so that an event notification response could return 1 to N 
>>> notifications in a single message. This would be more efficient and 
>>> should not result in a huge message, since values will not be sent over, 
>>> only keys if anything.
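>>> 
>>> Hypothetical framing, just to illustrate (vInt as elsewhere in the Hot 
>>> Rod protocol):
>>> 
>>>    [response header][event count: vInt]
>>>       [event 1: op type (1 byte)][key length: vInt][key bytes]
>>>       ...
>>>       [event N: op type (1 byte)][key length: vInt][key bytes]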
>>> 
>>> Otherwise, batching of notifications could be implemented at a later 
>>> stage using a mechanism similar to the replication queue. We could even 
>>> consider using the Disruptor instead of a blocking queue…
>> 
>> Let's start without batching then, at least for a first pass.
>> 
>> Cheers
>> Manik
>> 
>> --
>> Manik Surtani
>> [email protected]
>> twitter.com/maniksurtani
>> 
>> Lead, Infinispan
>> http://www.infinispan.org
>> 
> 
> --
> Galder Zamarreño
> Sr. Software Engineer
> Infinispan, JBoss Cache
> 

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


_______________________________________________
infinispan-dev mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
