1.a. Help with what? Do you know how Spark behaves in this case and what
guarantees it provides? To be honest, I'm still struggling to understand
why you don't want to use the Ignite API directly for updates. Is there a use
case that you tried to implement, but it didn't work for some reason?

1.b. Whether or not you need a transaction depends on what you're trying to
achieve, not on the number of backups. Backups help you avoid losing data in
case of node failures. Again, it's very hard to discuss this without a
particular use case in mind.
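For reference, backups are a per-cache setting, independent of whether you use transactions. A minimal sketch in Ignite's Spring XML style (the cache name "myCache" is just a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Keep one backup copy of every partition, so losing a single node loses no data. -->
    <property name="backups" value="1"/>
    <!-- FULL_SYNC makes a write complete only after backups are updated as well. -->
    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```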

2. Ignite can't do this, of course, but it sounds like you can filter the RDD
first and then map it. This way the modified RDD will be smaller and you
will have fewer updates.
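A minimal sketch of the filter-then-map idea, using a plain Scala collection as a stand-in for the RDD API. The names `needsUpdate` and `transform` are illustrative; with an actual `IgniteRDD` the chain is the same, followed by `savePairs` to write only the changed entries back:

```scala
// Stand-in data: (key, value) pairs, as you would have in a pair RDD.
val entries = List(1 -> "old-a", 2 -> "keep-b", 3 -> "old-c")

// Hypothetical predicate: selects only the entries that actually need changing.
def needsUpdate(kv: (Int, String)): Boolean = kv._2.startsWith("old")

// Hypothetical per-entry transformation, applied only to the filtered subset.
def transform(kv: (Int, String)): (Int, String) = (kv._1, kv._2.replace("old", "new"))

// Filter first, then map: the resulting collection contains only the updates.
// With an IgniteRDD: igniteRdd.filter(needsUpdate).map(transform), then savePairs.
val updates = entries.filter(needsUpdate).map(transform)
println(updates)
```

Because the filter runs before the map, entries that don't match the predicate are never transformed or written, which is exactly what keeps the update set small.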

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Consistency-Guarantees-Smart-Updates-with-Spark-Integration-tp10091p10121.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.