Following are my test steps for a continuous query.

step 1: Start an Ignite server with the command "sh ignite.sh".

step 2: Start an Ignite client; it runs a continuous query on cache1.
The main code is as follows:
    Ignition.setClientMode(true);
    Ignite ignite = Ignition.start();

    IgniteCache<Integer, String> cache1 = ignite.getOrCreateCache("cache1");

    ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

    // Callback that is called locally when update notifications are received.
    qry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {
        @Override public void onUpdated(
            Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println("Updated entry [key=" + e.getKey() +
                    ", val=" + e.getValue() + ']');
        }
    });

    // This filter will be evaluated remotely on all nodes.
    // Entries that pass this filter will be sent to the caller.
    qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
        @Override public CacheEntryEventFilter<Integer, String> create() {
            return new CacheEntryEventFilter<Integer, String>() {
                @Override public boolean evaluate(
                    CacheEntryEvent<? extends Integer, ? extends String> e) {
                    return e.getKey() > 10;
                }
            };
        }
    });

    // Execute query.
    QueryCursor<Cache.Entry<Integer, String>> cur = cache1.query(qry);
                
step 3: Start another Ignite client; it puts 10000 entries into cache1.
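For completeness, the step-3 client can be sketched like this (the cache name
"cache1" and the "value-N" format are my assumptions; only the key range
0..9999 matters for the test):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class Putter {
    public static void main(String[] args) {
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start()) {
            // Assumed cache name; must match the cache the step-2 client queries.
            IgniteCache<Integer, String> cache1 = ignite.getOrCreateCache("cache1");

            // Put 10000 entries; keys > 10 will pass the remote filter of step 2.
            for (int i = 0; i < 10000; i++)
                cache1.put(i, "value-" + i);
        }
    }
}
```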

Normally, the client from step 2 outputs 10000 messages such as
"Updated entry [key=xxx, val=xxxx]".

But sometimes, while the step-3 client is putting data into cache1, the
step-2 client restarts due to a fault. After that, the step-2 client no
longer outputs all 10000 "Updated entry [key=xxx, val=xxxx]" messages.
In other words, some events are lost during the client fault.

I want to know how to avoid losing events in this case.
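One pattern that is sometimes suggested for this is to pair the continuous
query with an initial query (ContinuousQuery.setInitialQuery with a
ScanQuery), so that a restarted client first re-reads the matching entries
already in the cache through the same cursor before receiving live updates.
A sketch, assuming cache1 is the IgniteCache<Integer, String> from step 2; I
am not certain this fully closes your failure window, since entries updated
more than once while the client is down are only seen in their latest state:

```java
// Assumptions: cache1 is IgniteCache<Integer, String>, same key filter as step 2.
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
        System.out.println("Updated entry [key=" + e.getKey() +
            ", val=" + e.getValue() + ']');
});

// Initial query: entries already in the cache that match the same predicate
// are delivered through the cursor before live update notifications start.
qry.setInitialQuery(new ScanQuery<Integer, String>((k, v) -> k > 10));

try (QueryCursor<Cache.Entry<Integer, String>> cur = cache1.query(qry)) {
    // Iterating the cursor yields the pre-existing entries first.
    for (Cache.Entry<Integer, String> e : cur)
        System.out.println("Existing entry [key=" + e.getKey() +
            ", val=" + e.getValue() + ']');

    // ... keep the cursor open to keep receiving live updates ...
}
```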



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-avoid-the-event-lost-in-the-continuous-query-tp7904.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
