Hi Devin,

maxSize is set per node for the specified data region, not per cluster.
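For reference, a minimal sketch of where that setting lives (the region name and the 16 GB value are just illustrative, matching the figure from your message):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch only: the maxSize below is allocated on EACH node that
// uses this configuration, so 11 nodes x 16 GB = 176 GB cluster-wide.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("Default_Region")                    // illustrative name
    .setMaxSize(16L * 1024 * 1024 * 1024)         // 16 GB per node
    .setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(
        new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region));
```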
A few clarifying questions:
What cache configuration are you using? Performance can depend on the
cache type and the number of backups.
How many clients and threads are writing to the cluster? The client side
could be a bottleneck.
Have you tried IgniteDataStreamer [2]? It's the recommended approach for
loading large amounts of data.
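Roughly, usage looks like this (a sketch assuming a live cluster; the cache name "myCache", key/value types, and tuning values are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

// Sketch: bulk-load entries through a data streamer instead of
// individual cache puts. Requires the cache to already exist.
try (Ignite ignite = Ignition.start();
     IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.perNodeBufferSize(1024);       // entries batched per node (tunable)
    streamer.perNodeParallelOperations(8);  // concurrent batches per node (tunable)

    for (long i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);  // buffered, sent in batches
} // close() flushes any remaining buffered data
```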

You can also refer to the persistence tuning documentation [1].

[1] https://ignite.apache.org/docs/latest/persistence/persistence-tuning
[2] https://ignite.apache.org/docs/latest/data-streaming#data-streaming
Thanks,
Pavel

Fri, Nov 13, 2020 at 07:26, Devin Bost <[email protected]>:

> We're trying to figure out how to get more throughput from our Ignite
> cluster.
> We have 11 dedicated Ignite VM nodes, each with 32 GB of RAM. Yet, we're
> only writing at 2k/sec max, even when we parallelize the writes to Ignite.
> We're using native persistence, but it just seems way slower than expected.
>
> We have our cache maxSize set to 16 GB. Since the cluster has a total of
> 352 GB of RAM, should our maxSize be set per node, or per cluster? If it's
> per cluster, then I'm thinking we'd want to increase maxSize to at least
> 300 GB. Looking for feedback on this.
>
> We've already turned off swapping on the boxes.
>
> What else can we tune to increase throughput? It seems like we should be
> getting 100x the throughput at least, so I'm hoping something is just
> misconfigured.
>
> Devin G. Bost
>


-- 

Regards

Pavel Vinokurov
