Hi!

The numbers sound very low. I run on hardware close to yours (3 nodes (X5660*5) and 1 client) and I get way more than 1500/sec; I'm not sure exactly how much, I will have to check. But as long as you do single get()s there is not much you can do: each get is one round trip over the network, and with single gets latency can have a huge impact. I modified my code so that most of the time I collect all gets within a 100 ms window into one getAll, and that makes a huge difference to performance.
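The batching trick I mean can be sketched in plain Java like this. Everything here (the KeyBatcher class, the window length, the loader function) is illustrative, not Ignite API; against a real cluster the loader would simply be `keys -> cache.getAll(keys)`.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Sketch: buffer individual get() calls for a short window, then resolve
// the whole batch with one getAll-style bulk round trip.
// KeyBatcher and the loader are illustrative names, not Ignite API.
public class KeyBatcher<K, V> {
    private final Function<Set<K>, Map<K, V>> bulkLoader;
    private final long windowMillis;
    private final Map<K, CompletableFuture<V>> pending = new HashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private boolean flushScheduled = false;

    public KeyBatcher(Function<Set<K>, Map<K, V>> bulkLoader, long windowMillis) {
        this.bulkLoader = bulkLoader;
        this.windowMillis = windowMillis;
    }

    // Each caller gets a future; all keys arriving within the same
    // window are answered by a single bulk load.
    public synchronized CompletableFuture<V> get(K key) {
        CompletableFuture<V> f = pending.computeIfAbsent(key, k -> new CompletableFuture<>());
        if (!flushScheduled) {
            flushScheduled = true;
            scheduler.schedule(this::flush, windowMillis, TimeUnit.MILLISECONDS);
        }
        return f;
    }

    private void flush() {
        Map<K, CompletableFuture<V>> batch;
        synchronized (this) {
            batch = new HashMap<>(pending);
            pending.clear();
            flushScheduled = false;
        }
        Map<K, V> results = bulkLoader.apply(batch.keySet()); // one round trip
        batch.forEach((k, f) -> f.complete(results.get(k)));
    }

    public void shutdown() { scheduler.shutdown(); }
}
```

In your client you would wrap the cache once, e.g. `new KeyBatcher<>(keys -> empCache.getAll(keys), 100)`, and call `batcher.get(key).join()` instead of `cache.get(key)`; N gets in the window then cost one network round trip instead of N.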

There is not much to change in the configuration; the number of backups doesn't have much impact on reads (unless you use REPLICATED mode, of course).

I am not sure how the traffic is handled, but if there is only one TCP connection to each node, I would think you won't get much benefit from more than 3 threads.

Did you read 500K unique entries, or the same ones multiple times?

Mikael

Den 2019-11-26 kl. 21:38, skrev Victor:
I am running some comparison tests (Ignite vs Cassandra) to check how to
improve the performance of the 'get' operation. The data is fairly
straightforward: a simple Employee object (10-odd fields), stored as a
BinaryObject in the cache as

IgniteCache<String, BinaryObject> empCache;

The cache is configured with Write Sync Mode = FULL_SYNC, Atomicity =
TRANSACTIONAL, Backups = 1 and Persistence = enabled.
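For reference, the setup described above corresponds roughly to the following Ignite configuration; this is a sketch, and the cache name "empCache" is illustrative.

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;

// Cache: FULL_SYNC writes, transactional atomicity, 1 backup.
CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("empCache");
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cacheCfg.setBackups(1);

// Default data region with native persistence enabled.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration igniteCfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg)
    .setCacheConfiguration(cacheCfg);
```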

Cluster config: 3 server + 1 client node, set up on 2 machines: the server
machine (Intel(R) Xeon(R) CPU X5675 @ 3.07GHz) and the client machine
(Intel(R) Xeon(R) CPU X5560 @ 2.80GHz).

The client has multiple threads (configurable) making concurrent 'get' calls.
Using 'get' on purpose due to use-case requirements.
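The client loop described above can be sketched as below; a ConcurrentHashMap stands in for the IgniteCache so the harness is self-contained, and the class name, key format and parameters are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of a multi-threaded get() benchmark loop. A plain Map stands in
// for IgniteCache; against a real cluster each get() is one round trip.
public class GetBenchmark {
    public static double throughputPerSec(Map<String, String> cache,
                                          int threads, int requestsPerThread) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int keyCount = cache.size();
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < requestsPerThread; i++)
                    cache.get("emp-" + (i % keyCount)); // one lookup per call
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (threads * (long) requestsPerThread) / seconds;
    }
}
```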

For about 500k requests, I am getting a throughput of about 1500/sec, even
though all of the data is off-heap with a cache-hit percentage of 100%.
Interestingly, with Cassandra I am getting similar performance, with key
cache and a limited row cache.
I've tried running with 10/20/30 threads; the performance is more or less the same.

I left the defaults for most of the data configuration. For this test I
turned persistence off; ideally, for gets it shouldn't really matter.
The performance is the same.

============================================
Data Regions Configured:
[19:35:58]   ^-- default [initSize=256.0 MiB, maxSize=14.1 GiB,
persistence=false]

Topology snapshot [ver=4, locNode=038f99b3, servers=3, clients=1,
state=ACTIVE, CPUs=40, offheap=42.0GB, heap=63.0GB]
============================================

Additionally, I ran top on both machines to check whether they are hitting
resource limits:
------ Server
PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
14159 root      20   0   29.7g   3.2g  15216 S  10.3  4.5   1:35.69 java
14565 root      20   0   29.4g   2.9g  15224 S   8.3  4.2   1:33.41 java
13770 root      20   0   30.0g   2.9g  15184 S   6.3  4.2   1:36.99 java

----- Client
3731 root      20   0   27.8g   1.1g  15304 S 136.5  1.5   2:39.16 java

As you can see, everything is well under capacity.

Frankly, I was expecting Ignite gets to be pretty fast, given all data is in
cache, at least judging by this test:
https://www.gridgain.com/resources/blog/apacher-ignitetm-and-apacher-cassandratm-benchmarks-power-in-memory-computing

Planning to run one more test tomorrow with no persistence and a near
cache (on-heap) enabled, to see if it helps.

Let me know if you see any obvious configuration settings that should be changed.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
