Hi, I'll give it a try.
Thank you.

On Wed, Jul 4, 2018 at 6:22 PM, Ilya Kasnacheev <[email protected]> wrote:
> Hello!
>
> You can try increasing the number of threads in the REST thread pool by setting
> igniteConfiguration.setConnectorConfiguration(new
> ConnectorConfiguration().setThreadPoolSize(64))
> or the corresponding Spring XML.
>
> This is as per https://apacheignite.readme.io/docs/rest-api
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-07-04 12:04 GMT+03:00 胡海麟 <[email protected]>:
>>
>> Hi,
>>
>> Here is the thread dump.
>>
>> Thank you.
>>
>> On Wed, Jul 4, 2018 at 5:52 PM, Ilya Kasnacheev
>> <[email protected]> wrote:
>> > Hello!
>> >
>> > Can you provide a thread dump collected when the system is under peak
>> > load?
>> >
>> > I think it's some other thread pool, such as the client pool or the
>> > management pool, but I have to take a look at the thread dump to be sure.
>> >
>> > Regards,
>> >
>> > --
>> > Ilya Kasnacheev
>> >
>> > 2018-07-04 11:33 GMT+03:00 胡海麟 <[email protected]>:
>> >>
>> >> Hi,
>> >>
>> >> We use Ignite as a Redis server.
>> >>
>> >> The use case is:
>> >> a. The write timeout is 15 ms on the client side.
>> >> b. There are 2 server nodes; each is an EC2 r4.4xlarge instance.
>> >> c. The write rate is about 120,000 req/s; in other words, 60,000 per node.
>> >>
>> >> The problem is that timeouts happen frequently, several per second.
>> >> A lower write rate results in fewer timeouts. I guess we have a
>> >> bottleneck somewhere.
>> >>
>> >> ==========
>> >> $ tail -f /opt/apache-ignite-fabric-2.5.0-bin/work/log/ignite-ee4f25ed.0.log | grep pool
>> >> ^-- Public thread pool [active=0, idle=0, qSize=0]
>> >> ^-- System thread pool [active=0, idle=16, qSize=0]
>> >> ==========
>> >> The system thread pool does not seem busy at all.
>> >>
>> >> ==========
>> >> $ tail -f /opt/apache-ignite-fabric-2.5.0-bin/work/log/ignite-ee4f25ed.0.log | grep "CPU "
>> >> ^-- CPU [cur=14.77%, avg=6.21%, GC=0%]
>> >> ^-- CPU [cur=13.43%, avg=6.23%, GC=0%]
>> >> ==========
>> >> CPU is not busy, either.
>> >>
>> >> We expected millisecond-level performance, but we see too many timeouts
>> >> now. Any ideas for optimizing the performance?
>> >>
>> >> Thanks.
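For anyone finding this thread later, Ilya's suggestion in programmatic form could look roughly like the sketch below. This is only an illustration, assuming ignite-core 2.5.x on the classpath; the class name is made up for the example, and the pool size of 64 is just the value from the thread, not a tuned recommendation. Ignite's Redis protocol support is served by the same REST/connector layer, which is why this pool is the one to enlarge.

```java
// Sketch: starting a server node with a larger REST (connector) thread pool.
// ConnectorConfiguration and setThreadPoolSize are the real Ignite APIs quoted
// in the thread; the surrounding class is illustrative only.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RestPoolSizeExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // The Redis/Memcached/REST protocols all go through the connector's
        // thread pool, so raising its size may help under heavy write load.
        cfg.setConnectorConfiguration(
            new ConnectorConfiguration().setThreadPoolSize(64));

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs until this block exits.
        }
    }
}
```

The equivalent Spring XML sets the same property on a `ConnectorConfiguration` bean wired into `IgniteConfiguration.connectorConfiguration`.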
