
RE: how to avoid lightweight transactions

A read before write is always going to cost tremendously more than just writing. Depending on your architecture, you may consider both of the options described.

If you have a CQRS architecture and are processing an event queue, then when doing LWT / read-before-write, your "write" is processed asynchronously by your command processor.

If you are directly interacting with Cassandra and need extremely fast, low-latency writes, I'd use the append-only method.

CQRS just separates the event processing from the reading, and when combined with an asynchronous architecture in your application, such as an event queue, it basically mitigates the performance loss of doing LWT.
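A minimal sketch of that decoupling with an in-process queue (names and structure are illustrative only, not a real CQRS framework): the caller enqueues a command and returns immediately, while a background command processor absorbs the cost of the LWT / read-before-write.

```python
import queue
import threading

commands = queue.Queue()
applied = []  # stands in for the Cassandra table

def command_processor():
    # Drains the queue; this is where the expensive LWT /
    # read-before-write against Cassandra would actually run.
    while True:
        cmd = commands.get()
        if cmd is None:  # sentinel: shut down
            break
        applied.append(cmd)

worker = threading.Thread(target=command_processor)
worker.start()

# The caller's latency is just the enqueue, not the LWT round trips.
commands.put({"id": "1", "status": "completed"})
commands.put(None)
worker.join()
```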

You can always use CQRS without LWT.

On Jun 21, 2018, 4:38 AM -0400, Jacques-Henri Berthemet <jacques-henri.berthemet@xxxxxxxxxxx>, wrote:



Another way would be to make Id the partition key and time a clustering column of type TimeUUID. Then you'll always insert records, never update; for each transaction you'll keep a row in the partition. When reading, you'll fetch all the rows for that partition by Id and process all of them to work out the real status. For example, if the final status must be completed and you have:


Id, TimeUUID, status

1, t0, added

1, t1, added

1, t2, completed

1, t3, added


When reading back you'll just discard the last row.
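That read-back step can be sketched in pure Python (a toy model, not driver code): rows are scanned in clustering order and anything after the terminal completed status is treated as stale and discarded.

```python
def resolve_status(rows):
    """rows: (time, status) tuples in clustering (TimeUUID) order."""
    resolved = None
    for _, status in rows:
        resolved = status
        if status == "completed":
            break  # "completed" is terminal; any later rows are discarded
    return resolved

rows = [("t0", "added"), ("t1", "added"), ("t2", "completed"), ("t3", "added")]
```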



If you're only concerned about the insert-or-update case but the data is actually the same, you can always insert. If you insert on an existing record it will just overwrite it; if you update a non-existing record it will insert the data. In Cassandra there is not much difference between insert and update operations.
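That upsert behaviour can be modelled with a plain dict (a toy illustration of the semantics, not Cassandra itself): both "insert" and "update" just write cells under the primary key, whether or not the row already exists.

```python
table = {}  # primary key -> cells

def upsert(pk, **cells):
    # Both INSERT and UPDATE in Cassandra behave like this write:
    # existing cells are overwritten, missing rows are created.
    table.setdefault(pk, {}).update(cells)

upsert(("1", "t0"), status="added")
upsert(("1", "t0"), status="completed")  # "insert" over an existing row overwrites
upsert(("1", "t1"), status="added")      # "update" of a missing row creates it
```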




Jacques-Henri Berthemet


From: Rajesh Kishore [mailto:rajesh10sinha@xxxxxxxxx]
Sent: Thursday, June 21, 2018 7:45 AM
To: user@xxxxxxxxxxxxxxxxxxxx
Subject: Re: how to avoid lightweight transactions




I think the LWT feature was introduced for exactly your kind of use case: you don't want other requests updating the same data at the same time. Under the hood it uses the Paxos consensus algorithm.

So, IMO your use case makes perfect sense for LWT to avoid concurrent updates.

If your issue is not concurrent updates, then IMHO you may want to split this into two steps:

- get the transcation_type at QUORUM (or a higher consistency level)

- and conditionally update the row at QUORUM (or a higher consistency level)

But remember, this won't be atomic and won't solve the concurrent update issue if you have one.
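The non-atomicity is easy to see in a toy interleaving (pure Python, illustrative only): if two writers both read before either writes, both conditions pass against stale snapshots and one update is silently lost.

```python
row = {"status": "added"}

def write_if(snapshot, expect, new_status):
    # The check runs against the earlier read, not the current row,
    # so nothing prevents another writer sneaking in between.
    if snapshot["status"] == expect:
        row["status"] = new_status
        return True
    return False

# Both writers read the row before either one writes:
a_view = dict(row)
b_view = dict(row)
write_if(a_view, "added", "completed")  # succeeds
write_if(b_view, "added", "cancelled")  # also succeeds: A's update is lost
```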




On Wed, Jun 20, 2018 at 2:59 AM, manuj singh <s.manuj545@xxxxxxxxx> wrote:

Hi all,

we have a use case where we need to update our rows frequently. In order to do so without overwriting other updates, we have to resort to lightweight transactions.

Since lightweight transactions are expensive (they can be about 4 times as expensive as a normal insert), how do we model around them?


e.g. I have a table:


CREATE TABLE multirow (
    id text,
    time text,
    transcation_type text,
    status text,
    PRIMARY KEY (id, time)
);

So let's say we update the status column multiple times. The first time we update, we also have to make sure the transaction row already exists; otherwise a normal update will insert it, and then when the original insert arrives it will overwrite the update.

So in order to fix that we need to use lightweight transactions.
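What the lightweight transaction buys here, sketched as a compare-and-set (a toy model; in CQL this would be something like UPDATE ... SET status = ? IF status = ?): the condition is re-checked and the write applied as one atomic step, so a stale writer is rejected instead of overwriting.

```python
row = {"status": "added"}

def compare_and_set(expect, new_status):
    # Check and write happen as a single step, which is the
    # per-partition guarantee a Paxos-backed LWT provides.
    if row["status"] == expect:
        row["status"] = new_status
        return True   # like [applied] = True in cqlsh
    return False      # like [applied] = False; row is unchanged

first = compare_and_set("added", "completed")   # wins
second = compare_and_set("added", "cancelled")  # rejected; status stays "completed"
```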


Is there another way I can model this so that we can avoid lightweight transactions?