Tolga, it looks like you call cache.get() for a key that resides on a remote node.
So yes, the local node blocks until the response arrives from the remote node.
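If the calling thread must not block on the remote round trip, the get can be issued asynchronously and the result handled in a listener. A minimal sketch, not from the original thread (names like "scenarioCache", ScenarioRecord, and process() are illustrative; getAsync() is the Ignite 2.0+ API, while on Ignite 1.x you would use cache.withAsync() and then asyncCache.future()):

```java
// Sketch only -- assumes a running Ignite node and a cache named "scenarioCache".
// Cache name, key, and value types are hypothetical, not from this thread.
IgniteCache<String, ScenarioRecord> cache = ignite.cache("scenarioCache");

// Non-blocking get: the calling thread continues immediately instead of
// parking on GridPartitionedSingleGetFuture as in the thread dump.
IgniteFuture<ScenarioRecord> fut = cache.getAsync(key);

// The listener fires on an Ignite system thread once the remote reply arrives.
fut.listen(f -> process(f.get()));
```

Whether this helps depends on whether the surrounding logic can proceed without the value; if every worker thread genuinely needs the result before its next step, batching keys into a single getAll() call per node round trip may reduce the wait instead.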

--Yakov

2017-02-21 10:23 GMT+03:00 Tolga Kavukcu <[email protected]>:

> Hi Val, everyone,
>
> I was able to overcome the write-behind issue and can process extremely
> fast on a single node. But when I switched to multiple nodes in partitioned
> mode, my threads started waiting on a condition. All 16 threads processing
> data wait at the same trace. Adding the thread dump:
>
>  java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0000000711093898> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4629)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1386)
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1118)
> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurrentScenarioRecord(ScenarioCacheDao.java:35)
>
> What might be the reason for this problem? Does it wait for a response from
> another node?
>
> -Regards.
>
> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu <[email protected]>
> wrote:
>
>> Hi Val,
>>
>> Thanks for your tip. With enough memory, I believe the write-behind queue
>> can handle peak times.
>>
>> Thanks.
>>
>> Regards.
>>
>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>> [email protected]> wrote:
>>
>>> Hi Tolga,
>>>
>>> There is a back-pressure mechanism to ensure that a node doesn't run out
>>> of memory because of a write-behind queue that grows too long. You can try
>>> increasing the writeBehindFlushSize property to relax it.
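For reference, write-behind is tuned on the cache configuration. A hedged Spring XML sketch (the cache name and the chosen value are illustrative, not from this thread; if I read the defaults correctly, writeBehindFlushSize defaults to 10240 entries, and the flush starts once the queue reaches that size):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Illustrative cache name -->
    <property name="name" value="scenarioCache"/>
    <property name="writeBehindEnabled" value="true"/>
    <!-- Larger flush size relaxes back-pressure at the cost of more heap
         held by the write-behind queue (default is 10240). -->
    <property name="writeBehindFlushSize" value="102400"/>
</bean>
```

Raising this trades heap for throughput headroom during peaks, so it should be sized against the node's available memory.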
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>
>
> --
>
> *Tolga KAVUKÇU*
>
