Possible starvation in striped pool in one of the nodes

2022-06-08 Thread Lo, Marcus
org.apache.ignite.internal.util.typedef.G >>> Possible starvation in striped pool. Thread name: sys-stripe-7-#8%Ignite% Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtAtomicSingleUpda

Re: Starvation in striped pool

2019-10-18 Thread Ilya Kasnacheev
Hello! Well, we had IGNITE_ENABLE_FORCIBLE_NODE_KILL, but the best solution, in my opinion, is to avoid adding anything unstable to the cluster. Regards, -- Ilya Kasnacheev Fri, 18 Oct 2019 at 08:35, ihalilaltun: > Hi Ilya, > From time to time, we have faced exactly the same problem. Is
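
A minimal sketch of the flag Ilya mentions, assuming Apache Ignite 2.x (the class name is illustrative); the property lets a server node forcibly drop a node it cannot reach over the communication SPI:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class ForcibleKillExample {
        public static void main(String[] args) {
            // Must be set before Ignition.start(); the equivalent JVM flag
            // is -DIGNITE_ENABLE_FORCIBLE_NODE_KILL=true.
            System.setProperty("IGNITE_ENABLE_FORCIBLE_NODE_KILL", "true");

            try (Ignite ignite = Ignition.start()) {
                // The node may now kill unreachable nodes instead of
                // retrying connections to them indefinitely.
            }
        }
    }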

Re: Starvation in striped pool

2019-10-17 Thread ihalilaltun
Hi Ilya, From time to time, we have faced exactly the same problem. Are there any best practices for handling network issues? What I mean is, if there are any network issues between client(s) and server(s), we want the cluster to keep running. As for the clients, they can be disconnected from the servers.

Re: Starvation in striped pool

2019-10-11 Thread Ilya Kasnacheev
Hello! I don't think there is any problem with idleConnectionTimeout, but you *should not* use nodes which are not mutually connectible to each other anyway. I can't really comment on the feasibility of dropping a client when it can't be reached via Communication. You can start a discussion about

Re: Starvation in striped pool

2019-10-11 Thread maheshkr76private
>>I'm almost certain that the problem is that the server node cannot open a connection to the client node (and while it tries, it will reject connection attempts from the client node) The default idleTimeout of the TCP communication SPI is 6 minutes. So I assume that after this timeout, the connection is closed and
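
For reference, a hedged sketch of where that timeout is tuned, assuming Ignite 2.x (the value shown is only an example): TcpCommunicationSpi exposes it via setIdleConnectionTimeout:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

    public class CommIdleTimeoutConfig {
        public static IgniteConfiguration config() {
            TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
            // Close communication connections idle longer than this many
            // milliseconds (10 minutes here).
            commSpi.setIdleConnectionTimeout(10 * 60 * 1000L);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCommunicationSpi(commSpi);
            return cfg;
        }
    }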

Re: Starvation in striped pool

2019-10-10 Thread Ilya Kasnacheev
Hello! I'm almost certain that the problem is that the server node cannot open a connection to the client node (and while it tries, it will reject connection attempts from the client node). clientReconnectDisabled=true will only concern discovery. In your case, there's no problem with discovery; the problem is

Re: Starvation in striped pool

2019-10-09 Thread maheshkr76private
Ilya. What is most mysterious to me is that I disabled reconnect of the thick client (clientReconnectDisabled=true). Still the server prints the below, where the same thick client is making an immediate attempt to reconnect back to the cluster, while the previous connection attempt still hasn't succeeded.
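
For reference, a minimal sketch of where this flag lives, assuming Ignite 2.x: clientReconnectDisabled is set on TcpDiscoverySpi and, as Ilya notes in the reply above, it only concerns discovery, not communication:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

    public class ClientReconnectConfig {
        public static IgniteConfiguration clientConfig() {
            TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
            // When true, a client that loses its discovery connection
            // stops instead of trying to rejoin the cluster.
            discoSpi.setClientReconnectDisabled(true);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true); // thick client
            cfg.setDiscoverySpi(discoSpi);
            return cfg;
        }
    }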

Re: Starvation in striped pool

2019-10-09 Thread maheshkr76private
Attached are the logs. In the server log, you will see the thick client continuously pinging the server indefinitely... there is no recovery of the thick client. So the problem is, we can't even reboot the thick client in a production scenario, as it doesn't even fail (meaning, the configured failure

Re: Starvation in striped pool

2019-10-09 Thread Ilya Kasnacheev
Hello! It's hard to say what happens here. What timeout settings do you have? Can you provide a complete log from the client node as well? Regards, -- Ilya Kasnacheev Tue, 8 Oct 2019 at 19:25, maheshkr76private: > Hello Ilya > Once the connection goes bad between client and server, which
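
For context, the timeout settings usually meant by this question are the failure detection timeouts on IgniteConfiguration; a minimal sketch, assuming Ignite 2.x (the values are only examples):

    import org.apache.ignite.configuration.IgniteConfiguration;

    public class TimeoutSettings {
        public static IgniteConfiguration config() {
            IgniteConfiguration cfg = new IgniteConfiguration();
            // How long a node may stay unresponsive before the cluster
            // treats it as failed (milliseconds).
            cfg.setFailureDetectionTimeout(10_000);       // server nodes
            cfg.setClientFailureDetectionTimeout(30_000); // client nodes
            return cfg;
        }
    }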

Re: Starvation in striped pool

2019-10-08 Thread maheshkr76private
Hello Ilya Once the connection goes bad between client and server, which configuration parameter on the thick-client side would force the thick client to stop pinging the server... I tried join timeout and connection timeouts in the communication and discovery SPIs, and nothing ever worked. I have seen in a few

Re: Starvation in striped pool

2019-10-08 Thread Ilya Kasnacheev
Hello! If the client node continues to respond via Discovery, the server node is not going to drop it for unreachability. This is the default behavior. Regards, -- Ilya Kasnacheev Tue, 8 Oct 2019 at 15:43, maheshkr76private: > OK. There could have been a temporary network issue between the server

Re: Starvation in striped pool

2019-10-08 Thread maheshkr76private
OK. There could have been a temporary network issue between the server and client node. However, I was expecting the server node to throw the client out of the cluster and resume normal functioning. But what bothers me is that the server node never recovered after the network issue and finally

Re: Starvation in striped pool

2019-10-08 Thread Ilya Kasnacheev
e-29268843.zip [11:16:09,531][WARNING][grid-timeout-worker-#27][G] >>> Possible starvation in striped pool. Thread name: sys-stripe-4-#5 Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=fal

Starvation in striped pool

2019-10-08 Thread maheshkr76private
][grid-timeout-worker-#27][G] >>> Possible starvation in striped pool. Thread name: sys-stripe-4-#5 Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridNearSingleGetRequest [futId=157

Re: When the client frequently has FullGC, it blocks all requests from the server. "Possible starvation in striped pool"

2019-05-23 Thread Ilya Kasnacheev
ing occurred is a large number of "[2019-05-21T16:36:04,880][WARN][grid-timeout-worker-#10343][G] >>> Possible starvation in striped pool." Please refer to the attachment for the full log; 10.110.118.53 in the log is the FullGC test node. What parameters

Re: ignite zk: Possible starvation in striped pool

2019-01-22 Thread wangsan
Thank you! I see that this is the communication SPI, not the discovery SPI. But on other nodes there are many ZK session timeout messages or ZK reconnect failure messages. And the starvation message is only printed on the node which hosts the ZK server (not a cluster, just three ZK nodes on one machine) in the same

Re: ignite zk: Possible starvation in striped pool

2019-01-22 Thread Denis Mekhanikov
communication SPI, and doesn't have anything to do with ZooKeeper. Denis Tue, 22 Jan 2019 at 15:38, wangsan: > 10:38:31.577 [grid-timeout-worker-#55%DAEMON-NODE-10-153-106-16-8991%] WARN o.a.ignite.internal.util.typedef.G - >>> Possible starvation in striped po

ignite zk: Possible starvation in striped pool

2019-01-22 Thread wangsan
10:38:31.577 [grid-timeout-worker-#55%DAEMON-NODE-10-153-106-16-8991%] WARN o.a.ignite.internal.util.typedef.G - >>> Possible starvation in striped pool. Thread name: sys-stripe-9-#10%DAEMON-NODE-10-153-106-16-8991% Queue: [] Deadlock: false Completed: 17156 Thread [

Re: Possible starvation in striped pool

2018-07-20 Thread Ilya Kasnacheev
Hello! At this point I recommend debugging which statements are run on Oracle and why they take so long. Also, I have noticed: appDataSource - is it behind some kind of connection pool? I am afraid it is possible that this data source is single-threaded in the absence of a connection pool, hence you

Re: Possible starvation in striped pool

2018-07-18 Thread Shailendrasinh Gohil
Here you go...

Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello again! I have just noticed the following stack trace: "flusher-0-#588%AppCluster%" #633 prio=5 os_prio=0 tid=0x7f18d424f800 nid=0xe1bb runnable [0x7f197c1cd000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at

Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello! Can you please share the configuration of your Apache Ignite nodes, especially the cache stores of the caches. I have just noticed that you're actually waiting on a cache store lock. Regards, -- Ilya Kasnacheev 2018-07-17 19:11 GMT+03:00 Shailendrasinh Gohil <

Re: Possible starvation in striped pool

2018-07-17 Thread Shailendrasinh Gohil
We are using a TreeMap for all the putAll operations. We also tried the streamer API to create automatic batches. The issue is still the same.

Re: Possible starvation in striped pool

2018-07-17 Thread Sambhaji Sawant
Hello, the same issue occurred when trying to put an object into the cache using the cache.put method. After changing put to putAsync the issue was solved. I have read that when you use the putAll method you should pass a sorted collection to it so that it avoids deadlock. Is that true? On Tue, Jul 17, 2018, 8:22 PM ilya.kasnacheev
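
A minimal sketch of the put-to-putAsync change described here, assuming Ignite 2.x (cache name and value are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.lang.IgniteFuture;

    public class PutAsyncExample {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, String> cache =
                    ignite.getOrCreateCache("myCache");

                // putAsync returns a future immediately instead of blocking
                // the calling thread until the update completes.
                IgniteFuture<Void> fut = cache.putAsync(1, "value");
                fut.listen(f -> System.out.println("put completed"));
            }
        }
    }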

Re: Possible starvation in striped pool

2018-07-17 Thread ilya.kasnacheev
Hello! I have noticed that you are using putAll in your code. Apache Ignite is susceptible to deadlocks in the same fashion as regular multi-threaded code: i.e., if you take multiple locks (as putAll does, on the partitions for its keys), you can get a deadlock unless you maintain a consistent order of locks,
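
A sketch of the lock-ordering fix Ilya describes, assuming Ignite 2.x: wrapping the batch in a TreeMap makes every thread acquire the per-key locks in the same (sorted) order:

    import java.util.Map;
    import java.util.TreeMap;
    import org.apache.ignite.IgniteCache;

    public class SortedPutAll {
        // Two threads calling putAll with overlapping keys in different
        // iteration orders can deadlock; a TreeMap iterates keys in sorted
        // order, so all threads take the locks in the same sequence.
        static void safeBatchPut(IgniteCache<Integer, String> cache,
                                 Map<Integer, String> batch) {
            cache.putAll(new TreeMap<>(batch));
        }
    }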

Re: Possible starvation in striped pool

2018-07-16 Thread Shailendrasinh Gohil
Please find the attached thread dump as requested. ServerThreadDump0716.txt

Re: Possible starvation in striped pool

2018-07-16 Thread Ilya Kasnacheev
Hello! Can you please provide a thread dump of the problematic cluster after removal of the close statements on caches? Regards, -- Ilya Kasnacheev 2018-07-16 17:21 GMT+03:00 Shailendrasinh Gohil < shailendrasinh.go...@salientcrgt.com>: > Thanks again for the response. > > We have tried removing

Re: Possible starvation in striped pool

2018-07-16 Thread Shailendrasinh Gohil
Thanks again for the response. We have tried removing the close statements but the result was the same. And yes, other threads are accessing the cache from the same DAO. We also tried both atomicityMode settings to see if there was any improvement. We also have write-behind enabled for the large tables with frequent get
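
For reference, a hedged sketch of a write-behind cache configuration, assuming Ignite 2.x (cache name, thresholds, and the omitted store factory are illustrative):

    import org.apache.ignite.configuration.CacheConfiguration;

    public class WriteBehindConfig {
        public static CacheConfiguration<Integer, String> cacheConfig() {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<>("largeTableCache");

            // ccfg.setCacheStoreFactory(...); // your JDBC/Oracle store here
            ccfg.setWriteThrough(true);
            ccfg.setWriteBehindEnabled(true);
            ccfg.setWriteBehindFlushFrequency(5_000); // flush every 5 s...
            ccfg.setWriteBehindFlushSize(10_240);     // ...or at this buffer size
            ccfg.setWriteBehindBatchSize(512);        // entries per store batch
            return ccfg;
        }
    }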

Re: Possible starvation in striped pool

2018-07-13 Thread Ilya Kasnacheev
Hello! I can see here that you are trying to destroy a cache: at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140) at

Re: Possible starvation in striped pool

2018-07-12 Thread Shailendrasinh Gohil
Thank you for your response. Please find attached thread dumps for client and server nodes. ClientThreadDump.txt ThreadDumpServer1.txt

Re: Possible starvation in striped pool

2018-07-12 Thread Ilya Kasnacheev
ta from cache. We see the below issue when there are more than 2 users performing a similar operation on their own data. This was not the performance we expected from the documentation. WARN [org.apache.ignite.internal.util.typedef.G] - >>> Possible starvati

Possible starvation in striped pool

2018-07-12 Thread Gohil, Shailendrasinh (INTL)
their data from cache. We see the below issue when there are more than 2 users performing a similar operation on their own data. This was not the performance we expected from the documentation. WARN [org.apache.ignite.internal.util.typedef.G] - >>> Possible starvation in striped pool.

Re: apache ignite atomicLong.incrementAndGet() is causing starvation in striped pool

2018-07-10 Thread vvasyuk
Hello Slava, thank you for the reply. I will try your solution.

Re: apache ignite atomicLong.incrementAndGet() is causing starvation in striped pool

2018-07-09 Thread Вячеслав Коптилин
o entries (which the client inserts) I see the below output only on one node: Incremented value: 1 Incremented value: 2 And after that I get the below warning messages in the logs on the node where "Incremented value" was printed (on the second node I see no messages):

apache ignite atomicLong.incrementAndGet() is causing starvation in striped pool

2018-07-09 Thread Vadym Vasiuk
node where "Incremented value" was printed (on the second node I see no messages) : 2018-07-09 21:56:57.993 WARN 1876 --- [eout-worker-#23] o.apache.ignite.internal.util.typedef.G : >>> Possible starvation in striped pool. Thread name: sys-stripe-0-#1 Queue: [] Deadlock

Re: Possible starvation in striped pool. message

2017-08-30 Thread ezhuravlev
peerClassLoading is used only for compute, for example for sharing job classes between nodes; it does not work for objects that are put into a cache. If you want to work without these classes on the nodes, take a look at BinaryObjects: https://apacheignite.readme.io/v2.0/docs/binary-marshaller Evgenii --
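
A minimal sketch of the BinaryObject approach Evgenii points to, assuming Ignite 2.x (cache name and field are illustrative): withKeepBinary() returns entries as BinaryObject, so the value class does not have to be on the reading node's classpath:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.binary.BinaryObject;

    public class KeepBinaryExample {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, BinaryObject> cache = ignite
                    .getOrCreateCache("people")
                    .<Integer, BinaryObject>withKeepBinary();

                // Fields are read from the binary form without deserializing
                // the value class on this node.
                BinaryObject person = cache.get(1);
                if (person != null) {
                    String name = person.field("name");
                    System.out.println(name);
                }
            }
        }
    }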

Re: Possible starvation in striped pool. message

2017-08-29 Thread kestas

Re: Possible starvation in striped pool. message

2017-08-09 Thread Yakov Zhdanov
ache.get(1).getSome();

Re: Possible starvation in striped pool. message

2017-08-07 Thread kestas
Yes, this seems to appear when we start working with large objects. Is there a way to solve it? Does it directly affect the performance of cache put/get operations?

Re: Possible starvation in striped pool. message

2017-08-04 Thread slava.koptilin
()/putAll() methods from different threads. In that case, you need to sort the collection of keys first, because batch operations on the same entries in a different order could lead to deadlock. Thanks.

Possible starvation in striped pool. message

2017-08-04 Thread kestas
Hi, sometimes we get this message in the logs. What does it mean? Jul 26, 2017 11:43:25 AM org.apache.ignite.logger.java.JavaLogger warning WARNING: >>> Possible starvation in striped pool. Thread name: sys-stripe-3-#4%null% Queue: [] Deadlock: false Completed: 17 Thread [

Re: Possible starvation in striped pool

2017-07-18 Thread Andrey Mashenkov
you use? Would you please share full logs? On Fri, Jul 14, 2017 at 1:24 PM, Alper Tekinalp <al...@evam.com> wrote: > Hi. What does the following log mean: [WARN ] 2017-07-12 23:00:50.786 [grid-tim

Re: Possible starvation in striped pool

2017-07-14 Thread Andrey Mashenkov
, Jul 14, 2017 at 1:24 PM, Alper Tekinalp <al...@evam.com> wrote: > Hi. What does the following log mean: [WARN ] 2017-07-12 23:00:50.786 [grid-timeout-worker-#71%cache-server%] G - >>> Possible starvation in striped pool: sys-stripe-10-#11%cache-server%

Possible starvation in striped pool

2017-07-14 Thread Alper Tekinalp
Hi. What does the following log mean: [WARN ] 2017-07-12 23:00:50.786 [grid-timeout-worker-#71%cache-server%] G - >>> Possible starvation in striped pool: sys-stripe-10-#11%cache-server% [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false,