It's quite easy to starve Ignite thread pools once you start using the
asynchronous API and listeners extensively. I guess there wouldn't be
built-in starvation detection in Ignite otherwise...
What is worse, the starvation may manifest itself only under heavy load and
only in a cluster.
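To give an idea of what such starvation typically looks like, here is a
minimal sketch (my own illustration, not code from the affected
application; the cache name, keys and values are made up). A listener
registered on an IgniteFuture runs on an Ignite-managed thread, so doing
another blocking cache operation inside it keeps that thread occupied, and
under load all pool threads can end up blocked this way:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;

public class StarvationSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("demo");

        CacheEntryProcessor<Integer, String, Void> proc = (entry, arguments) -> {
            entry.setValue("updated");
            return null;
        };

        // Anti-pattern: the listener below runs on an Ignite-managed thread.
        // Doing another blocking cache operation from it keeps that thread
        // busy; under heavy load all pool threads can end up waiting like
        // this (starvation).
        cache.invokeAsync(1, proc).listen(fut -> {
            String other = cache.get(2); // synchronous call on a pool thread
            System.out.println(other);
        });
    }
}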
That sounds very useful for a "what not to do" example. Could you please
give a little more detail (in broad strokes) on how the business code could
starve the Ignite thread pool? And if you were using entry processors, how
come the operations were not executed atomically - i.e. what made the race
condition possible?
Hi Ilya,
I have tracked this issue down to racy behavior in the business code
and to Ignite thread pool starvation caused by the application code.
Sorry for the false alarm.
---
Regards,
Kamil Mišúth
On 2019-05-22 18:46, Ilya Kasnacheev wrote:
Hello!
Do you have a reproducer for this behavior? Have you tried the same scenario
on 2.7? I doubt anyone will make the effort to debug 2.6.
Regards,
--
Ilya Kasnacheev
Thu, 25 Apr 2019 at 18:59, kimec.ethome.sk:
Greetings,
we've been chasing a weird issue in a two-node cluster for a few days now.
We have a Spring Boot application bundled with an Ignite server node.
We use invokeAsync on a TRANSACTIONAL PARTITIONED cache with 1 backup. We
assume that each node in the two-node cluster has a copy of the data.
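For reference, a cache configured the way described above could look
roughly like the sketch below (this is my reading of the setup, not the
original configuration; the cache name and key/value types are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheSetupSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("demo");
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        cfg.setCacheMode(CacheMode.PARTITIONED);
        // With 1 backup on a two-node cluster, every entry has a primary
        // copy on one node and a backup copy on the other.
        cfg.setBackups(1);

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
    }
}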