Ricky, do you mean you update the same two entries in all 50_000 transactions? Is it possible to collocate these entries?
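If both entries land on the same node, a transaction touching them avoids cross-node prepare/commit messaging. A minimal sketch of how that could look with Ignite's AffinityKey — cache name, key names, and the "group1" affinity key below are illustrative, not taken from your setup:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.AffinityKey;

public class CollocationSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical cache; in your case this would be the
            // transactional, partitioned cache from your configuration.
            IgniteCache<AffinityKey<String>, Double> cache =
                ignite.getOrCreateCache("myCache");

            // Both entries share the same affinity key ("group1"), so Ignite
            // maps them to the same partition and hence the same primary
            // node. A pessimistic transaction over both keys then stays
            // local to that node, which can raise TPS.
            cache.put(new AffinityKey<>("rowA", "group1"), 1.0);
            cache.put(new AffinityKey<>("rowB", "group1"), 2.0);
        }
    }
}
```

With query entities you can achieve the same by annotating a key field with @AffinityKeyMapped instead of using the AffinityKey wrapper.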
On Thu, Aug 31, 2017 at 5:56 AM, rnauv <ricky.nauva...@sci.ui.ac.id> wrote:
> I want two clients to concurrently calculate some values from a cache that
> reads from a database, and store the results back in the database. I can't
> use the atomic cacheAtomicityMode because there is no lock, so the
> calculation might read a stale value. I also need a backup to avoid losing
> cache data when a node fails.
> Let's say I run 2 servers and 2 clients. Right now, I'm able to do 50_000
> transactions on two rows in 70 seconds (around 1400 TPS). If possible,
> I want to increase the TPS.
>
> My initial configuration is like this:
> Server configuration:
> 1. CacheAtomicityMode = transactional
> 2. Read Through, Write Through, and Write Behind = true
> 3. Cache Mode = partitioned
> 4. Backup = 1
> 5. Query entities for my table
> 6. cacheStoreFactory for my CacheStore
> 7. DiscoverySPI, which I set to my localhost
>
> The client configuration is the same as the server's, excluding
> ReadThrough, WriteThrough, WriteBehind, QueryEntities, and
> CacheStoreFactory.
>
> The transaction concurrency is pessimistic and the transaction isolation
> is repeatable_read.
> I didn't set the write synchronization mode (which by default is
> PRIMARY_SYNC).
>
> The transaction procedure is similar to the one in the Ignite
> documentation [1].
>
> Any suggestions?
>
> Thanks,
> Ricky
>
> [1] https://apacheignite.readme.io/docs/transactions
>
> --
> Ricky
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

--
Best regards,
Andrey V. Mashenkov