I want to remove old data from the cache once a day; the amount of data is
about 5 million entries.
My main removal process is as follows:
        
        SqlFieldsQuery sql = new SqlFieldsQuery(
                "select id, name from Table2 where data = xxxx");

        cursor = Cache.query(sql);

        Set<Table2Key> keys = new HashSet<>();

        for (List<?> row : cursor) {
            // A new key must be created per row; reusing a single
            // Table2Key instance would leave the set holding references
            // to one repeatedly mutated object.
            Table2Key key = new Table2Key();
            key.setid(Long.valueOf((String) row.get(0)));
            key.setname((String) row.get(1));
            keys.add(key);
        }

        for (Table2Key key : keys) {
            Cache.remove(key);
        }
                
But about 10 minutes after the removal starts, the Ignite cluster jams.
I see
"[SEVERE][tcp-client-disco-sock-writer-#2%null%][TcpDiscoverySpi] Failed to
send message: null
java.io.IOException: Failed to get acknowledge for message:
TcpDiscoveryClientHeartbeatMessage [super=TcpDiscoveryAbstractMessage" in
the log, and I can no longer connect to the cluster with "ignitevisorcmd".
If the amount of data is relatively small, the above method works fine.
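One variant I have been considering is removing the keys in fixed-size batches rather than one `Cache.remove()` per key, so each network round trip deletes many entries. Below is a minimal sketch of that chunking logic; a plain `HashMap` stands in for the cache so the sketch is self-contained, and with Ignite the inner call would be `cache.removeAll(batch)` (the batch size of 10,000 is an assumption, not a measured value):

```java
import java.util.*;

// Sketch: delete a large key set in fixed-size batches.
// With Ignite, replace the keySet().removeAll(...) calls with
// IgniteCache#removeAll(Set), which removes a whole batch per call.
public class BatchRemove {
    static <K, V> void removeInBatches(Map<K, V> cache,
                                       Collection<K> keys,
                                       int batchSize) {
        Set<K> batch = new HashSet<>();
        for (K k : keys) {
            batch.add(k);
            if (batch.size() >= batchSize) {
                cache.keySet().removeAll(batch); // Ignite: cache.removeAll(batch)
                batch.clear();
            }
        }
        if (!batch.isEmpty())
            cache.keySet().removeAll(batch);     // remove the final partial batch
    }

    public static void main(String[] args) {
        // Small demonstration: 25 entries, remove keys 0..9 in batches of 4.
        Map<Long, String> cache = new HashMap<>();
        for (long i = 0; i < 25; i++)
            cache.put(i, "v" + i);

        List<Long> toRemove = new ArrayList<>();
        for (long i = 0; i < 10; i++)
            toRemove.add(i);

        removeInBatches(cache, toRemove, 4);
        System.out.println(cache.size()); // 15 entries remain
    }
}
```

Batching keeps each operation bounded, which may reduce the pressure on the cluster compared with 5 million individual remove calls, though I have not verified this at full scale.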



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-remove-large-amount-of-data-from-cache-tp10024.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
