Hi, sure. Thanks for your interest, by the way.
We needed to update our cache objects concurrently, and on every update we wanted to compute some results by comparing the object's state before and after the change. To make this fast enough, we used IgniteDataStreamer to send the update events from Kafka in batches to the Ignite server nodes, so the calculation runs where the data lives.

But we ran into a concurrency issue: two different threads could update the cache object with the same key, and whichever thread finished last overwrote the other's changes. So we decided to use Ignite's deadlock-free optimistic transaction support, since it performs much better than pessimistic transactions. To achieve that, we implemented AbstractTransactionalStreamReceiver. In this implementation we catch TransactionOptimisticException and retry the update, up to a certain limit.

Later we hit another problem, related to update ordering. Imagine you get an update to fieldX of cacheObjectX at timestamp=100, and then another update to the same fieldX of the same cacheObjectX at timestamp=200. In a distributed environment we don't know which update will run first. If the timestamp=100 update runs first, we are safe; but what happens if the timestamp=200 update runs first and the timestamp=100 update runs afterwards? In that case we would overwrite more recent data in cacheObjectX. To solve that, we designed our model to carry a per-field timestamp ("fieldX__timestamp", and likewise for the other columns): before applying an update, we first check that it is more recent than what is stored, and skip it otherwise. We implemented that logic in our TimestampBasedUpdateStreamReceiver class.

I hope I managed to explain it; please let us know if you have questions.
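To make the two ideas concrete, here is a minimal plain-Java sketch of the combined approach: a per-field timestamp gate plus a bounded retry loop on concurrent modification. All class and method names here are illustrative, not from our actual receivers, and the ConcurrentHashMap compare-and-swap stands in for Ignite's optimistic transactions (where the retry would be triggered by catching TransactionOptimisticException instead).

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: timestamp-gated update with optimistic retry.
// Each field value carries its own timestamp; an update is applied only
// if its timestamp is strictly newer than the stored one. The CAS retry
// loop mirrors retrying an optimistic transaction after a conflict.
public class TimestampGatedCache {
    // Immutable pair of field value and its per-field timestamp
    // (the "fieldX" / "fieldX__timestamp" pattern from the model).
    public record Versioned(String value, long timestamp) {}

    private final ConcurrentHashMap<String, Versioned> cache = new ConcurrentHashMap<>();
    private static final int MAX_RETRIES = 5;   // illustrative retry limit

    /** Apply the update only if it is newer; retry on concurrent modification. */
    public boolean update(String key, String newValue, long ts) {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            Versioned current = cache.get(key);
            if (current == null) {
                if (cache.putIfAbsent(key, new Versioned(newValue, ts)) == null)
                    return true;      // we won the race for the first insert
                continue;             // another thread inserted first: retry
            }
            if (ts <= current.timestamp())
                return false;         // stale update: keep the newer data
            if (cache.replace(key, current, new Versioned(newValue, ts)))
                return true;          // CAS succeeded, update applied
            // CAS failed: another thread changed the entry concurrently;
            // loop and retry, analogous to an optimistic-tx conflict.
        }
        return false;                 // gave up after MAX_RETRIES conflicts
    }

    public Versioned get(String key) { return cache.get(key); }

    public static void main(String[] args) {
        TimestampGatedCache c = new TimestampGatedCache();
        c.update("objX.fieldX", "new", 200);   // applied
        c.update("objX.fieldX", "old", 100);   // rejected as stale
        System.out.println(c.get("objX.fieldX").value());
    }
}
```

With this gate in place it no longer matters in which order the timestamp=100 and timestamp=200 updates arrive: the stale one is rejected either way.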
-- View this message in context: http://apache-ignite-developers.2346864.n4.nabble.com/DataStreamer-Transactional-and-Timestamp-Implementation-tp19129p19199.html Sent from the Apache Ignite Developers mailing list archive at Nabble.com.