Thanks. Will check Scalar DB.

I avoid b

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Hiroyuki Yamada <mogwa...@gmail.com>
Sent: 14 July 2020 09:26
To: user@cassandra.apache.org
Subject: Re: design principle to manage roll back

As one of the options, you can use a (logged) batch for kind-of-atomic mutations.
I say "kind of" because a batch is not truly atomic when its mutations span
multiple partitions.
More specifically, the mutations reach all the replicas only eventually, so
intermediate states can be observed, and there is no rollback.
https://docs.datastax.com/en/cql-oss/3.3/cql/cql_using/useBatch.html
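For illustration, a logged batch over two tables might look like the sketch below. The schema, table names, and values are made up for this example, not taken from the thread:

```cql
-- Hypothetical query tables for the recipe example.
BEGIN BATCH
  INSERT INTO recipes (recipe_id, title)
    VALUES (5b6962dd-3f90-4c93-8f61-eabfa4a803e2, 'Carbonara');
  INSERT INTO recipes_by_user (user_id, recipe_id)
    VALUES ('manu', 5b6962dd-3f90-4c93-8f61-eabfa4a803e2);
APPLY BATCH;
```

Because the two rows live in different tables (and so in different partitions), the logged batch only guarantees that both mutations are eventually applied; a reader can still observe one row before the other.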

You can do something similar on the application side by making write
operations idempotent and
retrying them until they have all succeeded.
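That retry pattern can be sketched in Scala. `retryIdempotent` and the assumption that each write is an idempotent upsert returning a `Future` are mine for illustration, not part of the thread:

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Retry an idempotent write until it succeeds, up to `attempts` tries.
// This is only safe because re-running the same upsert is a no-op.
def retryIdempotent[T](attempts: Int)(op: () => Future[T]): Future[T] =
  op().recoverWith {
    case _ if attempts > 1 => retryIdempotent(attempts - 1)(op)
  }
```

Each per-table write would then be wrapped in `retryIdempotent`, so a transient failure in one step is retried rather than leaving the other tables permanently out of sync.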

As a last resort, you can use Scalar DB.
https://github.com/scalar-labs/scalardb
When you run operations through Scalar DB on top of Cassandra,
you can achieve ACID transactions on Cassandra.
If there is a failure during a transaction, it will be properly
rolled back or rolled forward based on the recorded transaction state.

Hope it helps.

Thanks,
Hiro

On Tue, Jul 14, 2020 at 4:55 PM Manu Chadha <manu.cha...@hotmail.com> wrote:
>
> Thanks. Actually, none of my data is big data; I just chose not to use a 
> traditional RDBMS for my project. Features like replication, fast reads and 
> writes, always-on availability, and scalability appealed to me. I am also happy 
> with eventual consistency.
>
>
>
> To be honest, I feel there has to be a way: if Cassandra promotes data 
> duplication by encouraging a table per query, then there should be a way to 
> keep the duplicate copies consistent.
>
>
>
>
>
>
>
>
> From: onmstester onmstester
> Sent: 14 July 2020 08:04
> To: user
> Subject: Re: design principle to manage roll back
>
>
>
> Hi,
>
>
>
> I think that Cassandra alone is not suitable for your use case. You could use a 
> mix of a distributed/NoSQL store (for storing the individual records of whatever 
> makes your input big data) and a relational/single-node database (for the 
> transactional, non-big-data part).
>
>
>
> Sent using Zoho Mail
>
>
>
>
>
> ---- On Tue, 14 Jul 2020 10:47:33 +0430 Manu Chadha <manu.cha...@hotmail.com> 
> wrote ----
>
>
>
>
>
> Hi
>
>
>
> What are the design approaches I can follow to ensure that data is consistent 
> from the application's perspective (not just from each individual table's 
> perspective)? I am thinking of the issues that arise because Cassandra provides 
> no rollback and no atomic multi-table transactions. Is Cassandra unsuitable for 
> my project?
>
>
>
> Cassandra recommends creating a new table for each query. This results in 
> data duplication (which doesn’t bother me). Take the following scenario: an 
> application that allows users to create, share and manage food recipes. Each 
> of the functions below adds records to a separate table.
>
>
>
> for {
>   savedRecipe            <- saveInRecipeRepository(...)
>   recipeTagRepository    <- saveRecipeTag(...)
>   partitionInfoOfRecipes <- savePartitionOfTheTag(...)
>   updatedUserProfile     <- updateInUserProfile(...)
>   recipesByUser          <- saveRecipesCreatedByUser(...)
>   supportedRecipes       <- updateSupportedRecipesInformation(tag)
> }
>
>
>
> If, say, updateInUserProfile fails, then I'll have to manage the rollback in 
> the application itself, as Cassandra doesn’t do it. My concern is that the 
> rollback process could itself fail, due to, say, network issues.
>
>
>
> Is there a recommended way or a design principle I can follow to keep data 
> consistent?
>
>
>
> Thanks
>
> Manu
>
>
>
>
>
>
>
>
>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org
