Hello Antonio! I can only observe 'timeout' checkpoints, which is good news - you are not running out of checkpoint buffer.
Otherwise, maybe you are hitting an actual performance limit, i.e., your system is genuinely saturated at this point. What is the total amount of data per node at this time? What is the size of your cache entries? I have also noted that your 2 nodes have wildly different configs: the first one has around 39G of data region, the second one has 8G. (For reference, two sketches follow the quoted message below: one for checking the per-node figures, and one of the configuration as you describe it.)

Regards,
--
Ilya Kasnacheev


Fri, 1 Mar 2019 at 17:19, Antonio Conforti <[email protected]>:

> Hello Ilya.
>
> I ran the test again from scratch at a fixed rate of 4000 msg/sec, with the
> environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE=66 and the cache:
>
> 1) PARTITIONED
> 2) TRANSACTIONAL
> 3) persistence enabled
> 4) backups = 0
> 5) indexes on key and value
> 6) data region 8 GB
> 7) checkpoint buffer size 2 GB
> 8) WAL mode LOG_ONLY
> 9) WAL archive disabled
> 10) Pages Write Throttling enabled
>
> During the test I could observe that the total checkpoint elapsed time grew
> and the platform processed all the messages without queueing. Throttling was
> logged in every checkpoint.
> After about 18 million entries the performance slowed down and queues
> formed. The total checkpoint elapsed time exceeded the checkpoint timeout
> (default).
> A lot of "Critical system error detected" messages were logged and the
> platform never recovered its performance.
> I also observed that the data region filled up after the performance slowed
> down and the checkpoint elapsed time exceeded the timeout.
>
> The logs and configuration files are attached:
>
> log_ignite_190301_HOST1.gz
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/log_ignite_190301_HOST1.gz>
>
> log_ignite_190301_HOST2.gz
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/log_ignite_190301_HOST2.gz>
>
> While waiting for your suggestions, let me know if a reproducer project for
> the performance slowdown I observed would be useful.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
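
For reference, here is a minimal Java sketch of how the per-node figures asked about above could be printed with the public DataRegionMetrics API; the config path and class name are placeholders, nothing here is taken from your actual project:

    import java.util.Collection;

    import org.apache.ignite.DataRegionMetrics;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class DataRegionReport {
        public static void main(String[] args) {
            // Starts a node from a local Spring config; "config/ignite-node.xml" is a placeholder path.
            try (Ignite ignite = Ignition.start("config/ignite-node.xml")) {
                // One DataRegionMetrics instance per configured data region on this node.
                Collection<DataRegionMetrics> regions = ignite.dataRegionMetrics();

                for (DataRegionMetrics m : regions) {
                    // Allocated off-heap size vs. size resident in RAM, plus page fill factor and
                    // dirty-page count; some figures are only populated when metricsEnabled is set
                    // on the data region configuration.
                    System.out.printf("region=%s allocated=%d bytes physical=%d bytes fill=%.2f dirtyPages=%d%n",
                        m.getName(),
                        m.getTotalAllocatedSize(),
                        m.getPhysicalMemorySize(),
                        m.getPagesFillFactor(),
                        m.getDirtyPages());
                }
            }
        }
    }

Dividing the allocated size by your entry count also gives a rough per-entry footprint, which would help answer the cache entry size question.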

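And a hedged sketch of the storage and cache settings listed in points 1)-10), written as Java configuration rather than your Spring XML; the cache and region names are illustrative only, and items 5), 9) and IGNITE_MAX_INDEX_PAYLOAD_SIZE are left out since they are set through query entities, WAL paths and an environment variable respectively:

    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.WALMode;

    public class NodeConfigSketch {
        public static IgniteConfiguration configuration() {
            DataRegionConfiguration region = new DataRegionConfiguration()
                .setName("default")                                   // assumed region name
                .setPersistenceEnabled(true)                          // 3) persistence enabled
                .setMaxSize(8L * 1024 * 1024 * 1024)                  // 6) 8 GB data region
                .setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024) // 7) 2 GB checkpoint buffer
                .setMetricsEnabled(true);                             // lets dataRegionMetrics() report full figures

            DataStorageConfiguration storage = new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(region)
                .setWalMode(WALMode.LOG_ONLY)                         // 8) WAL mode LOG_ONLY
                .setWriteThrottlingEnabled(true);                     // 10) pages write throttling enabled

            CacheConfiguration<Object, Object> cache = new CacheConfiguration<>("TestCache") // assumed cache name
                .setCacheMode(CacheMode.PARTITIONED)                  // 1) PARTITIONED
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)   // 2) TRANSACTIONAL
                .setBackups(0);                                       // 4) backups = 0

            return new IgniteConfiguration()
                .setDataStorageConfiguration(storage)
                .setCacheConfiguration(cache);
        }
    }

The point of the sketch is mainly that both nodes should end up with the same default region size, so the 39G vs. 8G mismatch noted above is worth double-checking in your two config files.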