Thanks Denis.

From: Denis Magda [mailto:dma...@gridgain.com]
Sent: Friday, June 24, 2016 1:00 AM
To: user@ignite.apache.org
Subject: Re: performance issues

Hi Pradeep,

Member-member (server-server) performance is better than client-server performance in 
this scenario because in the first case roughly half of the data is stored on the 
server that executes the benchmark, so there is no network I/O at all for those 
entries, while in the client-server case the client always has to send the data to a 
remote server.

In fact, if you run more servers on different physical machines and start the 
benchmark from a client node, the performance should be better than with a 
configuration that has fewer servers.
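
For illustration, here is a minimal sketch (the cache name "query" and the key range are hypothetical, not taken from the benchmark) that uses Ignite's Affinity API to count how many keys are primary on the local node; in a 2-server topology roughly half of them should be local, which is exactly the portion that needs no network I/O:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cluster.ClusterNode;

    public class LocalKeyRatio {
        public static void main(String[] args) {
            // Assumes this node joins an already running cluster that has a cache named "query".
            try (Ignite ignite = Ignition.start()) {
                Affinity<Integer> aff = ignite.affinity("query");
                ClusterNode localNode = ignite.cluster().localNode();

                int local = 0, total = 10_000;
                for (int key = 0; key < total; key++) {
                    // Keys whose primary copy lives on this node are served without network I/O.
                    if (aff.isPrimary(localNode, key))
                        local++;
                }
                System.out.printf("Primary on this node: %d of %d keys%n", local, total);
            }
        }
    }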

—
Denis

On Jun 22, 2016, at 6:06 PM, Pradeep Badiger <pradeepbadi...@fico.com> wrote:

Hi Denis,

Thanks for your response. I ran the benchmark test on a single VM with 8 cores and 
2 GB allocated to each server instance (member). All the tests used optimistic 
transactions with no backups and the default JVM settings in benchmark.properties. 
I see that the performance of the client-member setup is lower than that of the 
member-member setup. The client was configured using --client in 
benchmark.properties. I have attached the configuration for the 1 client, 1 server 
mode. I tried to run it on a single server with no clients, but I was not able to 
configure that in the benchmark test. I am not sure it is worth testing that 
scenario, as all the calls would be local and latency would be a lot lower.
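
For reference, the operation being measured is roughly the following; this is only a minimal sketch under the settings described above (transactional cache, no backups, optimistic concurrency), with a hypothetical cache name and key, not the actual Yardstick driver code:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;

    public class OptimisticPutGet {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // Transactional cache with no backups, matching the benchmark settings above.
                CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("tx-cache");
                ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
                ccfg.setBackups(0);

                IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

                // One optimistic GET/PUT pair; the benchmark repeats this from many threads.
                try (Transaction tx = ignite.transactions().txStart(
                        TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
                    Integer val = cache.get(1);
                    cache.put(1, val == null ? 0 : val + 1);
                    tx.commit();
                }
            }
        }
    }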

Please let me know whether member-member performance is expected to be a lot better 
than client-member performance.

Clients  Servers  Threads  Min (ms)  Avg (ms)  Max (ms)
0        2        8        0.341     0.451     0.881
0        2        8        0.362     0.470     0.874
1        2        8        0.695     0.749     1.070
1        1        8        0.576     0.726     1.102

Thanks,
Pradeep V.B.

From: Denis Magda [mailto:dma...@gridgain.com]
Sent: Thursday, June 16, 2016 6:46 PM
To: user@ignite.apache.org
Subject: Re: performance issues

Hi,

In single-server (embedded) mode there is no I/O (networking) at all; all requests 
are executed locally. When you use a client node you will have I/O delays. The 
performance here can depend on several factors:
- latency and throughput of your network;
- CPU saturation.

So take a look at the usage of these system resources.
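
For completeness, the only configuration difference between the two modes is the client flag on the node; here is a minimal sketch of a client-mode put (the cache name "demo" is hypothetical, and it assumes at least one server node is already running):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ClientNodePut {
        public static void main(String[] args) {
            // Client nodes hold no cache data, so every cache operation is a network round-trip.
            IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
            try (Ignite client = Ignition.start(cfg)) {
                IgniteCache<Integer, String> cache = client.getOrCreateCache("demo");
                cache.put(1, "value");            // goes to the remote primary server
                System.out.println(cache.get(1)); // another round-trip unless a near cache is configured
            }
        }
    }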

Also make sure that there are no long GC pauses and that GC threads don't consume 
too much CPU. Refer to this doc for JVM tuning settings:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
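
One quick way to check this from inside the benchmark JVM is the standard GC MXBeans; a minimal sketch using plain JDK APIs (not something specific to the Ignite tuning guide):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            // Cumulative collection counts and times since JVM start; sample periodically
            // and diff the values to spot long or frequent pauses during the benchmark run.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: collections=%d, total time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }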

Finally, share your benchmark source and configuration for validation.

—
Denis

On Jun 16, 2016, at 11:04 PM, Pradeep Badiger <pradeepbadi...@fico.com> wrote:

Hi,

I am trying to run the Yardstick Ignite benchmark on my local VM, which has 8 cores 
and 16 GB of RAM. I can see that the performance of optimistic PUT/GET is much lower 
in client-server mode than when running within a single server (embedded mode). The 
performance also degrades with one additional server node. Can someone help me 
optimize this?

There is a blog post where the author commented that the product's performance is 
much the same whether it runs in client or server mode. Am I missing something here?

https://gridgain.blogspot.com/2015/04/benchmarking-data-grids-apache-ignite.html?showComment=1466104423051


Thanks,
Pradeep V.B.


