A read before write is always going to be tremendously more expensive than just writing. Depending on your architecture, you may want to consider both of the options described.
If you have a CQRS architecture and are processing an event queue, then with LWT / read-before-write your "write" is processed asynchronously by YOUR command processor. If you are interacting with Cassandra directly and need extremely fast writes with no latency, I'd use the append-only method. CQRS just separates the event processing from the reading, and when combined with an asynchronous architecture in your application, such as an event queue, it basically mitigates / hedges the performance loss of doing LWT. You can always use CQRS without LWT.

Rahul

On Jun 21, 2018, 4:38 AM -0400, Jacques-Henri Berthemet <jacques-henri.berthe...@genesys.com>, wrote:
> Hi,
>
> Another way would be to make your PK a clustering key: keep Id as the
> partition key and add time as a clustering column of type TimeUUID. Then
> you'll always insert records, never update; for each "transaction" you'll
> keep a row in the partition. When you read all the rows of that partition
> by Id, you'll process all of them to know the real status. For example, if
> the final status must be "completed" and you have:
>
> Id, TimeUUID, status
> 1, t0, added
> 1, t1, added
> 1, t2, completed
> 1, t3, added
>
> When reading back you'll just discard the last row.
>
> If you're only concerned about the "insert or update" case but the data is
> actually the same, you can always insert. If you insert on an existing
> record it will just overwrite it; if you update without an existing record
> it will insert the data. In Cassandra there is not much difference between
> insert and update operations.
>
> Regards,
> --
> Jacques-Henri Berthemet
>
> From: Rajesh Kishore [mailto:rajesh10si...@gmail.com]
> Sent: Thursday, June 21, 2018 7:45 AM
> To: email@example.com
> Subject: Re: how to avoid lightwieght transactions
>
> Hi,
>
> I think the LWT feature was introduced for exactly your kind of use case -
> you don't want other requests updating the same data at the same time. It
> uses the Paxos algorithm (a multi-round consensus protocol, which is what
> makes it expensive, not a simple two-phase commit).
> So, IMO your use case makes perfect sense for LWT, to avoid concurrent
> updates.
> If your issue is not the concurrent update one, then IMHO you may want to
> split this into two steps:
> - get the transcation_type with quorum (or a higher consistency level)
> - conditionally update the row with quorum (or a higher consistency level)
> But remember, this won't be atomic in nature and won't solve the
> concurrent update issue if you have one.
>
> Regards,
> Rajesh
>
>
> On Wed, Jun 20, 2018 at 2:59 AM, manuj singh <s.manuj...@gmail.com> wrote:
> > Hi all,
> > we have a use case where we need to update our rows frequently. Now in
> > order to do so, and so that we don't override updates, we have to resort
> > to lightweight transactions.
> > Since lightweight transactions are expensive (could be 4 times as
> > expensive as a normal insert), how do we model around them?
> >
> > e.g. I have a table:
> >
> > CREATE TABLE multirow (
> >     id text,
> >     time text,
> >     transcation_type text,
> >     status text,
> >     PRIMARY KEY (id, time)
> > )
> >
> > So let's say we update the status column multiple times. The first time
> > we update, we also have to make sure that the transaction exists,
> > because otherwise a normal update will insert it, and then the original
> > insert comes in and overrides the update.
> > So in order to fix that we need to use lightweight transactions.
> >
> > Is there another way I can model this so that we can avoid lightweight
> > transactions?
> >
> > Thanks
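The overwrite race described in the original question can be illustrated without a cluster. The sketch below is a toy model of Cassandra's last-write-wins reconciliation, not the real storage engine or driver API: whichever mutation carries the highest write timestamp wins, regardless of arrival order, which is how a delayed original INSERT (stamped later) can clobber an UPDATE that was applied first.

```python
# Toy last-write-wins cell, illustrating (not implementing) how Cassandra
# reconciles competing writes to the same row without LWT.

def apply_writes(writes):
    """Apply (write_timestamp, status) mutations; the highest timestamp wins.

    Arrival order is irrelevant, which is the crux of the problem in the
    thread: INSERT and UPDATE are both just upserts of cells.
    """
    cell = None  # (timestamp, status) of the current winner
    for ts, status in writes:
        if cell is None or ts > cell[0]:
            cell = (ts, status)
    return cell[1] if cell else None

# The race: the UPDATE is processed first (timestamp 0), then the delayed
# original INSERT arrives and is stamped later (timestamp 1) -- the stale
# "added" wins over the newer logical state "completed".
print(apply_writes([(0, "completed"), (1, "added")]))  # -> added
```

This is why plain upserts alone cannot protect the update: without LWT (or the append-only model below in the thread), the outcome depends only on timestamps.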
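Jacques-Henri's append-only alternative moves conflict resolution to read time: every write inserts a new row under the same partition (TimeUUID clustering column), and the reader scans the partition to compute the real status. A minimal sketch of that read-back, using the example partition from his mail; the function name and the treatment of "completed" as a terminal state are illustrative assumptions, not code from the thread.

```python
# Sketch of the append-only read-back: rows arrive already sorted by the
# TimeUUID clustering order, and anything written after the terminal
# status (e.g. a delayed duplicate of the original insert) is discarded.

def resolve_status(rows, terminal="completed"):
    """rows: (time, status) tuples in clustering (time) order."""
    current = None
    for _, status in rows:
        if current == terminal:
            break  # discard rows written after completion
        current = status
    return current

# The example partition from the mail (id=1): the t3 "added" row is
# discarded because "completed" was already reached at t2.
rows = [("t0", "added"), ("t1", "added"), ("t2", "completed"), ("t3", "added")]
print(resolve_status(rows))  # -> completed
```

The trade-off is the one the thread implies: writes stay plain (and cheap), at the cost of reading and reducing more rows per partition.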