Hi,

As part of your test environment you should use two or more servers, since the
basic design of Geode is built around distribution across a cluster of
servers. In a test it is best to use at least two servers, so that your
client can distribute its load across the two members. In a single-host test
setup you may not see significant initial improvements, but as you scale to
production loads across multiple machines you will get performance and
stability benefits from having tested with distribution from the start.
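As a rough sketch, a minimal two-server test cluster can be stood up with gfsh; the member names and ports below are illustrative, not required values:

```shell
# Start a locator for member discovery
gfsh -e "start locator --name=locator1 --port=10334"

# Start two cache servers that join the cluster through the locator,
# so client load can be spread across both members
gfsh -e "start server --name=server1 --locators=localhost[10334] --server-port=40404"
gfsh -e "start server --name=server2 --locators=localhost[10334] --server-port=40405"
```

With two members up, a client pool pointed at the locator will balance its connections across both servers.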

The conserve-sockets parameter affects the way servers communicate among
themselves: setting it to false opens additional sockets between the
peer/server nodes for handling updates, so this parameter doesn't have as
much impact in a single-server test.
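For reference, conserve-sockets is set in the server's gemfire.properties (or as a -D system property at server start); the fragment below just shows the non-default value being discussed in this thread:

```
# gemfire.properties (server side)
# false = each thread gets its own dedicated socket for
# peer-to-peer messaging instead of sharing a pool of sockets
conserve-sockets=false
```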

You may need to do virtualization and OS tuning to get the full impact;
increasing the socket buffer sizes should help if you haven't done this
already.
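As one example of that OS tuning, on Linux the socket buffer ceilings can be raised with sysctl; the sizes below are illustrative starting points, not recommendations:

```shell
# Raise the kernel's max socket receive/send buffer sizes (bytes).
# Requires root; persist in /etc/sysctl.conf if the results help.
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
```

The corresponding Geode setting is the socket-buffer-size property, which should stay within the OS limits above to take effect.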

*Vince Ford*
GemFire Toolsmith Engineering
Beaverton, OR USA
http://www.pivotal.io
Open Source Project Geode https://geode.incubator.apache.org/

On Fri, Jun 9, 2017 at 9:37 AM, Michael Stolz <[email protected]> wrote:

> Try doing just raw pipe communications between a bunch of processes to see
> if you can pin the cpu.
>
> # cat somebigfile | cat | cat... >/dev/null
>
> If that can't pin cpu then you know it's the operating system.
>
>
> --
> Mike Stolz
> Principal Engineer - Gemfire Product Manager
> Mobile: 631-835-4771
>
> On Jun 9, 2017 9:22 AM, "Xu, Nan" <[email protected]> wrote:
>
> Thanks for pointing this out, very useful.
>
>
>
> I changed the way I test the performance and got a result I cannot explain.
>
>
>
> I use 2 separate virtual machines. One runs the client, one runs the
> server, both sitting on the same physical box.
>
>
>
> The client puts small messages (about 10 bytes) as quickly as possible
> through 128 threads. Both client and server have conserve-sockets=false.
>
>
>
> I can see there are 128 TCP connections between them and I send about
> 50,000 messages/s.
>
>
>
> The server has 4 cores and 3 out of 4 are constantly at 100%, but one core
> is only at 30%. On the server, I only run 1 locator and 1 server and no
> other programs. The region is PARTITION. I publish about 2000 keys.
>
>
>
> Why is there a core at only 30%? My point is, if I can use the last core
> more, I might be able to publish even quicker.
>
>
>
> Thanks,
>
> Nan
>
>
>
> *From:* Akihiro Kitada [mailto:[email protected]]
> *Sent:* Thursday, June 08, 2017 7:13 PM
> *To:* [email protected]
> *Subject:* Re: Geode Performance how to push the cpu to 100%?
>
>
>
> Hello Nan,
>
>
>
> Why don't you check the Geode statistics?
>
>
>
> http://geode.apache.org/docs/guide/11/reference/statistics/statistics_list.html
>
>
>
> Maybe disk I/O or some other cause is involved...
>
>
>
>
>
>
> --
>
> Akihiro Kitada  |  Staff Customer Engineer  |  +81 80 3716 3736
> Support.Pivotal.io  |  Mon-Fri 9:00am to 5:30pm JST  |  1-877-477-2269
>
>
>
>
>
> 2017-06-09 6:09 GMT+09:00 Xu, Nan <[email protected]>:
>
> Hi,
>
>
>
>    I am trying to see the limits of Geode performance.
>
>
>
> Here is what I did.
>
>
>
> Single machine. OS: CentOS 7. 8 cores. 2.6 GHz.
>
>
>
> Create a single locator, a single server, and a single region.
>
>
>
> The only configuration is:
>
> Server: conserve-sockets=false
>
> Region is PARTITION
>
> Client and server running on the same machine.
>
>
>
>
>
> In my client I set up a 16-thread pool to get data, but I can only push
> the CPU to around 90-93% on the CentOS box.
>
>
>
> Why can't I push it to 100%? I am suspecting:
>
> 1.       The TCP connection between the server and client is not fast
> enough. Maybe increase the number of TCP connections? I only see one
> connection between client and server.
>
> 2.       There is some lock at the server? I realized that I can push
> the CPU from 50% to 90% just by adding setPoolThreadLocalConnections(true),
> so maybe there is some other setting I am missing.
>
>
>
> Thanks,
>
> Nan
>
>
>
> Client side program.
>
>
>
> import java.util.concurrent.Executors
> import java.util.concurrent.atomic.AtomicLong
> import scala.concurrent.{ExecutionContext, Future}
> import scala.util.Random
> import org.apache.geode.cache.Region
> import org.apache.geode.cache.client.{ClientCache, ClientCacheFactory,
>   ClientRegionFactory, ClientRegionShortcut}
>
> // host and region (the region name) are defined elsewhere
> val cache: ClientCache = new ClientCacheFactory().addPoolLocator(host, 10334)
>     .set("log-level", "info")
>     .set("conserve-sockets", "false")
>     .setPoolMinConnections(4)
>     .setPoolMaxConnections(12)
>     .setPoolThreadLocalConnections(true)
>     .create
>
>   val regionFactory: ClientRegionFactory[String, String] =
>     cache.createClientRegionFactory(ClientRegionShortcut.PROXY)
>   val region1: Region[String, String] = regionFactory.create(region)
>
>   implicit val ec =
>     ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))
>   val rnd = new Random
>   val j = new AtomicLong(0)
>
>   // 16 threads issuing gets against the 2000 keys as fast as possible
>   for (i <- 1 to 16) yield Future {
>     while (true) {
>       j.addAndGet(1)
>       region1.get("" + (rnd.nextInt(2000) + 1))
>     }
>   }
>
>
>
>
>
>
> ------------------------------
>
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
>
>
