Hi Naveen,

We currently have two data regions: a small one for ingest, set to 128
MB, and a larger one for requests (4 GB). We leave the checkpoint page
buffer size at the default, so it will be 1 GB for the larger region and
possibly 128 MB for the smaller one (if I recall the sizing rules
correctly). I'm planning to combine them to improve checkpointing behaviour.

I guess what you're saying is that by setting the buffer size smaller, you
cap the volume of WAL updates that need to be applied when restarting your
nodes?

Sounds like something worth trying...
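
Something like this is what I have in mind. A rough sketch only: the
region name, the 4 GB region size, and the 256 MB buffer value below are
illustrative placeholders, not settled numbers.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class CombinedRegionExample {
        public static void main(String[] args) {
            // One combined persistent region with an explicit checkpoint
            // page buffer, rather than two regions on default buffer sizes.
            DataRegionConfiguration combined = new DataRegionConfiguration()
                .setName("Combined")                              // illustrative name
                .setPersistenceEnabled(true)
                .setMaxSize(4L * 1024 * 1024 * 1024)              // 4 GB region
                .setCheckpointPageBufferSize(256L * 1024 * 1024); // explicit 256 MB,
                                                                  // overriding the ~1 GB default

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDefaultDataRegionConfiguration(combined);

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

            Ignite ignite = Ignition.start(cfg);
        }
    }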

I guess the risk here is that if the number of dirty pages hits that limit
during a checkpoint, Ignite will block writes until that checkpoint is
complete, and schedule another checkpoint immediately after the first has
completed. Do you see that occurring in your system?

Thanks,
Raymond

On Wed, Jan 13, 2021 at 7:52 PM Naveen <naveen.band...@gmail.com> wrote:

> Hi Raymond
>
> Did you try checkpointPageBufferSize instead of the time interval? We have
> used 24 MB as the checkpointPageBufferSize and it is working fine for us.
> We also have close to 12 TB of data, and it takes a good 6 to 10 minutes to
> bring a node up and make the cluster active.
> Regarding the number of partitions, 128 partitions should do; it is working
> well for us.
>
> Thanks


-- 
Raymond Wilson
Solution Architect, Civil Construction Software Systems (CCSS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com
