Denis, I get that about when to scale. And in my test, these are fairly
large boxes with lots of headroom, hardly doing 10% or less CPU-wise.
But why do I see a performance difference when I put data into 1 node vs.
3 nodes with PRIMARY_SYNC turned on?
Shouldn't the behavior and workflow be the same?
Each node process uses many threads, thread pools, and other resources to
process the app's requests as well as for internal needs. Thus, once a
single-node cluster reaches its maximum potential, we need to scale. That
answers the 1 vs. 3 nodes setup question.
-
Denis
On Thu, Dec 12, 2019 at 5:54 AM
I am aware of those nuances around distributed systems.
What I am trying to understand is: with the write synchronization mode set to
PRIMARY_SYNC, the response does not wait for updates to reach the backups, as
long as the primary node is updated. With this setting, why should 1 node vs.
3 nodes matter?
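For reference, this is how the mode in question would be set on the cache (a minimal fragment; the cache name and key/value types are placeholders, not from the original thread):

```java
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// "employees" and the Long/Employee types are illustrative placeholders.
CacheConfiguration<Long, Employee> cfg = new CacheConfiguration<>("employees");

// PRIMARY_SYNC: the client gets its response once the primary copy is
// updated; backups are updated asynchronously, off the response path.
cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
```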
This is a very common pitfall with distributed systems: comparing 1 node
vs. 3 nodes. In short, it is not correct to compare them directly.
When you write to one node, each write does the following:
1) the client sends the request to the server
2) the server updates the data
3) the server sends the response back to the client
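The steps above can be sketched as a toy latency model. This is entirely my own illustration, not Ignite code: the class name, constants, and costs are made up, and it deliberately ignores real overheads such as putAll fan-out across partitions, transactional locking, and async backup traffic competing for CPU and network. It only shows why a multi-node write with a synchronously updated backup involves more hops than a single-node write:

```java
// Hypothetical hop-counting model of a single cache write.
public class WriteHopModel {
    // Arbitrary time units; real values depend on network and hardware.
    static final int NETWORK_RTT = 10;
    static final int LOCAL_UPDATE = 1;

    // Single node, no backups: client -> primary -> client.
    static int singleNodeWriteCost() {
        return NETWORK_RTT + LOCAL_UPDATE;
    }

    // 3 nodes, 1 backup, FULL_SYNC: the primary also waits for the
    // backup's acknowledgement before responding to the client.
    static int fullSyncWriteCost() {
        return NETWORK_RTT + LOCAL_UPDATE   // client -> primary
             + NETWORK_RTT + LOCAL_UPDATE;  // primary -> backup, synchronous
    }

    // PRIMARY_SYNC: the response path looks like the single-node case,
    // but the backup replication still happens and consumes resources.
    static int primarySyncWriteCost() {
        return NETWORK_RTT + LOCAL_UPDATE;
    }

    public static void main(String[] args) {
        System.out.println(singleNodeWriteCost()); // 11
        System.out.println(fullSyncWriteCost());   // 22
        System.out.println(primarySyncWriteCost()); // 11
    }
}
```

In this naive model PRIMARY_SYNC matches the single-node cost, which is exactly the intuition behind the question in this thread; the difference observed in practice comes from the work the model leaves out.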
Any pointers to understand this behavior?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Folks,
I am doing a putAll test with a simple Employee POJO, stored as binary. The
cache is configured with:
Atomicity Mode = Transactional
Write Sync Mode = Full Sync
Backup Count = 1
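The configuration above would look roughly like this in code (a sketch; the cache name and key/value types are placeholders I chose, not from the original post):

```java
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// "employeeCache" and the Long/Employee types are illustrative placeholders.
CacheConfiguration<Long, Employee> ccfg = new CacheConfiguration<>("employeeCache");
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
ccfg.setBackups(1);
```

Note that FULL_SYNC here means each write waits for the backup as well, which matters for the 1-node vs. 3-node comparison discussed later in the thread.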
Deployment config is 2 large Linux boxes:
Box 1 - 3 server nodes
Box 2 - 1 client node
500k load with