Guys, I was asked how to effectively change many rows in a cache at once. Currently, this can be done via ScanQuery: we send affinityCall(), and inside the callable we iterate over the local primary cache partitions, filter entries by a predicate, and then do cache puts for the rows we want to change.
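To make the current pattern concrete, here is a minimal runnable sketch. A plain Map stands in for the entries of one local primary partition, and the filter-then-put loop mirrors what the callable would do inside affinityCall(); the real Ignite plumbing (ScanQuery, IgniteCache) is deliberately elided.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class ScanUpdateSketch {
    // Stand-in for iterating one local primary partition: in real
    // Ignite this loop would run inside an affinityCall() on the
    // primary node, reading entries via a ScanQuery.
    static <K, V> int scanAndUpdate(Map<K, V> partition,
                                    Predicate<V> filter,
                                    UnaryOperator<V> transform) {
        int updated = 0;
        for (Map.Entry<K, V> e : partition.entrySet()) {
            if (filter.test(e.getValue())) {
                e.setValue(transform.apply(e.getValue())); // the "cache put"
                updated++;
            }
        }
        return updated;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> partition = new HashMap<>();
        for (int i = 0; i < 10; i++)
            partition.put(i, i);

        // Double every even value: filter by predicate, then put back.
        int n = scanAndUpdate(partition, v -> v % 2 == 0, v -> v * 2);

        System.out.println(n);                // 5 entries matched
        System.out.println(partition.get(4)); // 8
    }
}
```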
I want to suggest a more convenient API. Please share your thoughts.

1. The key-value pair in a scan query is replaced with a single object, IgniteRow, which is essentially a set of name-value pairs: the union of the fields from the key and the value. If field names are not unique across a key-value pair, then that pair is omitted with a warning.

2. IgniteRow should be mutable. We can allow changing any field in the row and storing the result back to the cache. If a changed field belongs to the cache key, then a new key should be inserted and the previous one removed. Optionally, we can support throwing an exception if the key maps to another node/partition after mutation.

3. Such updates can be enlisted in an ongoing transaction. For simplicity, let them be local transactions on the node we are running the scan query on. However, I would not bother with this for now.

4. I think it is inconvenient to convert a binary object to a builder, change a field, and serialize it back to a binary object. How about having BinaryObject replace(String fldName, Object newVal)? If it is a simple replace, it can be done directly in the array (or a copy of the initial array), which should be many times more efficient. Imagine we change an int field, or a string to another string of the same length. This should also apply to IgniteRow.

--Yakov
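P.S. A toy illustration of the same-width in-place replace from point 4. This is purely hypothetical: replaceIntAt and the fixed two-int layout are inventions for the sketch, whereas a real BinaryObject carries a header and schema and the field offset would come from its metadata. The point is only that a fixed-width field can be patched in a copy of the marshalled bytes without rebuilding the object through a builder.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class InPlaceReplaceSketch {
    // Hypothetical helper mirroring the proposed
    // BinaryObject.replace(fldName, newVal) for the easy case: the
    // new value has the same serialized width as the old one, so we
    // patch a copy of the marshalled bytes directly.
    static byte[] replaceIntAt(byte[] marshalled, int fieldOffset, int newVal) {
        byte[] copy = Arrays.copyOf(marshalled, marshalled.length); // keep original intact
        ByteBuffer.wrap(copy).order(ByteOrder.LITTLE_ENDIAN).putInt(fieldOffset, newVal);
        return copy;
    }

    public static void main(String[] args) {
        // Toy "binary object": two little-endian ints at offsets 0 and 4.
        byte[] original = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)
            .putInt(0, 100)
            .putInt(4, 200)
            .array();

        byte[] patched = replaceIntAt(original, 4, 999);

        ByteBuffer buf = ByteBuffer.wrap(patched).order(ByteOrder.LITTLE_ENDIAN);
        System.out.println(buf.getInt(0)); // 100, untouched
        System.out.println(buf.getInt(4)); // 999, replaced in place
    }
}
```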