Re: Cache Structure

2018-11-13 Thread Andrey Mashenkov
Hi,

Yes, to add a new item to a list value in the second cache, Ignite will have
to deserialize the whole list (with all of its items), then add the new item,
and then serialize the list again.
You can try to use BinaryObjects to avoid unnecessary deserialization of
list items [1].
Also note that k1 and k2 data will have different distributions, and adding a
new instance of T will require 2 operations (as you mentioned) on different
nodes.


If the k2 <- k1 relation is one-to-many, there is another way to achieve the
same result using SQL [2].
With this approach, adding a new instance will be a single operation on one
node (Ignite will just need to update a local index in addition), but a query
by k2 will be a broadcast unless the data is collocated [3].


[1]
https://apacheignite.readme.io/docs/binary-marshaller#section-binaryobject-cache-api
[2]
https://apacheignite-sql.readme.io/docs/java-sql-api#section-sqlfieldsqueries
[3] https://apacheignite.readme.io/docs/affinity-collocation
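The read-modify-write cycle described above can be illustrated with plain Java serialization. This is a sketch only: Ignite's binary marshaller uses its own wire format, and the class and values below are made up for illustration, but the whole-value round trip for a List cache value is the same idea.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Illustration of the cost discussed above: a cache value that is a
// serialized list must be fully deserialized, modified, and re-serialized
// just to add one element.
public class ListValueCost {
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    static List<Long> deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (List<Long>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = serialize(new ArrayList<>(List.of(1L, 2L, 3L)));
        // Adding one key: whole list in, one append, whole list out.
        List<Long> list = deserialize(stored);
        list.add(4L);
        stored = serialize(list);
        System.out.println(deserialize(stored)); // prints [1, 2, 3, 4]
    }
}
```

This is also why keeping the second cache's value small (a list of k1 keys rather than full T instances, as proposed) limits how much data is rewritten on each update.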

On Tue, Nov 13, 2018 at 11:34 PM Ramin Farajollah (BLOOMBERG/ 731 LEX) <
rfarajol...@bloomberg.net> wrote:

> Hi,
>
> Please help me structure the cache to store instances of a type, say T.
> I'd like to cache the objects in two different ways:
>
> 1. By a unique key (k1), where the value is a single instance
> 2. By a non-unique key (k2), where the value is a list of instances
>
> Please comment on my approach:
>
> - Create a cache with k1 to an instance of T.
> - Create a second cache with k2 to a list of k1 keys.
> - To add a new instance of T, I will have to update both caches. Will this
> result in serializing the instance (in the first cache) and the list of
> keys (in the second cache)? -- Assume not on-heap.
> - If so, will each addition/deletion re-serialize the entire list in the
> second cache?
>
> Thank you!
> Ramin
>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: How does Ignite provides load balancing?

2018-11-13 Thread Alejandro Santos
Dear Ilya,
This is exactly what I was looking for!
I can't find much documentation about it in the Ignite docs, only posts in
the mailing list.
Is there any paper that studies this for Ignite?
Thanks
Alejandro


On Tue, Nov 13, 2018 at 5:34 PM Ilya Kasnacheev
 wrote:
>
> Hello!
>
> It will use Rendezvous hashing of keys:
> https://en.wikipedia.org/wiki/Rendezvous_hashing
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Fri, Nov 9, 2018 at 4:24 PM Alejandro Santos wrote:
>>
>> Dear Denis,
>>
>> How does Ignite provide load balancing between nodes?
>>
>> Is it hash-based?
>> Is it dynamically allocated?
>> Is it something else?
>>
>> Thanks,
>>
>> On Thu, Nov 8, 2018 at 1:14 AM Denis Magda  wrote:
>> >
>> > Hi,
>> >
>> > In general, the load is balanced because the data is distributed evenly 
>> > across a cluster of machines. For instance, if you utilize key-value calls 
>> > then each request goes to a specific node. If you're on SQL then a query 
>> > might be broadcasted or sent to a specific node as well.
>> >
>> > Overall, yes, Ignite is the right solution if you need to scale and 
>> > accelerate performance.
>> >
>> > --
>> > Denis
>> >
>> > On Tue, Nov 6, 2018 at 6:55 AM Alejandro Santos  wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I've been reading the Ignite documentation and have some technical
>> >> questions. I need to evaluate massive storage systems for a
>> >> specific application and I would like to understand how Ignite works.
>> >>
>> >> My application needs a buffering space that writes arbitrary values,
>> >> but then reads on average half of the values at most once. This is a
>> >> random process, and we can't really predict which keys will be read.
>> >>
>> >> Is Ignite the right tool for this application? Do you need more
>> >> information?
>> >>
>> >> Thank you,
>> >>
>> >> --
>> >> Alejandro Santos
>>
>>
>>
>> --
>> Alejandro Santos



-- 
Alejandro Santos


Cache Structure

2018-11-13 Thread Ramin Farajollah (BLOOMBERG/ 731 LEX)
Hi,

Please help me structure the cache to store instances of a type, say T.
I'd like to cache the objects in two different ways:

1. By a unique key (k1), where the value is a single instance
2. By a non-unique key (k2), where the value is a list of instances

Please comment on my approach:

- Create a cache with k1 to an instance of T.
- Create a second cache with k2 to a list of k1 keys.
- To add a new instance of T, I will have to update both caches. Will this
result in serializing the instance (in the first cache) and the list of keys
(in the second cache)? -- Assume not on-heap.
- If so, will each addition/deletion re-serialize the entire list in the second 
cache?

Thank you!
Ramin



Re: Does Apache Ignite support java 8 and hibernate 5.2 NOW?

2018-11-13 Thread Ilya Kasnacheev
Hello!

There is a patch available at
https://issues.apache.org/jira/browse/IGNITE-9893
You could probably apply it and build your own integration module.

Regards,
-- 
Ilya Kasnacheev


On Wed, Nov 7, 2018 at 8:42 PM daya airody wrote:

> Hibernate 5.2.x expects a different interface from Ignite for the Hibernate
> entity cache and second-level cache.
>
> The ignite-hibernate_5.1 library only supports the interface required by
> Hibernate 5.1. When is Ignite going to support Hibernate 5.2.x?
>
> On Spring Boot 2.0.1.RELEASE, only Java method caching is working. This
> issue is a blocker for all applications using Spring Boot 2.0.1 and above.
>
> Has anybody worked on supporting Hibernate 5.2.x?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to add a new cache type to a existing cache?

2018-11-13 Thread Ilya Kasnacheev
Hello!

In Apache Ignite, once a cache is created, most of its configuration settings
cannot be changed.

There is a slight deviation from this rule when we consider the ALTER TABLE
or CREATE INDEX commands, but I don't think you can add new indexed types.

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 9, 2018 at 1:00 PM kcheng.mvp wrote:

>
> my cache is created via (cache name is 'abc')
> ==
> igniteSpringBean.getOrCreateCache(cfg)
> ==
>
> and the indexedType is [Long.class, Person.class, Long.class,
> Student.class]
>
>
> Keeping the server node running, I issue this command from `sqlline.sh`:
>
> 
> CREATE TABLE IF NOT EXISTS Person (
>   id int,
>   city_id int,
>   name varchar,
>   age int,
>   company varchar,
>   PRIMARY KEY (id, city_id)
> ) WITH "template=partitioned,backups=1,cache_name=abc, key_type=PersonKey,
> value_type=MyPerson";
> 
>
> After the command executes, I cannot see the new table 'PERSON' in the
> schema 'abc'.
>
> If I remove `cache_name=abc` then I can see a new table `PERSON` is
> created in the `PUBLIC` schema.
>
>
> Is there any way to add a new key/value type to an existing cache?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How does Ignite provides load balancing?

2018-11-13 Thread Ilya Kasnacheev
Hello!

It will use Rendezvous hashing of keys:
https://en.wikipedia.org/wiki/Rendezvous_hashing

Regards,
-- 
Ilya Kasnacheev
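For intuition, rendezvous (highest-random-weight) hashing, the idea behind Ignite's RendezvousAffinityFunction, can be sketched in a few lines of Java. The node names and the hash-mixing function below are illustrative assumptions, not Ignite's actual implementation:

```java
import java.util.List;
import java.util.Objects;

// Minimal sketch of rendezvous hashing: each (key, node) pair gets a score,
// and the key is assigned to the node with the highest score. While the
// topology is stable, the same key always maps to the same node.
public class Rendezvous {
    // Pick the node with the highest combined hash score for the key.
    static String nodeFor(Object key, List<String> nodes) {
        String best = null;
        long bestScore = Long.MIN_VALUE;
        for (String node : nodes) {
            long score = mix(Objects.hash(key, node));
            if (score > bestScore) {
                bestScore = score;
                best = node;
            }
        }
        return best;
    }

    // Simple 64-bit finalizer to spread the combined hash.
    static long mix(long z) {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        z = (z ^ (z >>> 33)) * 0xc4ceb9fe1a85ec53L;
        return z ^ (z >>> 33);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-a", "node-b", "node-c");
        // Deterministic assignment: repeated lookups agree.
        String first = nodeFor("user:42", nodes);
        System.out.println(first.equals(nodeFor("user:42", nodes))); // prints true
    }
}
```

A nice property of this scheme is that removing one node only remaps the keys that were assigned to that node; everything else stays put.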


On Fri, Nov 9, 2018 at 4:24 PM Alejandro Santos wrote:

> Dear Denis,
>
> How does Ignite provide load balancing between nodes?
>
> Is it hash-based?
> Is it dynamically allocated?
> Is it something else?
>
> Thanks,
>
> On Thu, Nov 8, 2018 at 1:14 AM Denis Magda  wrote:
> >
> > Hi,
> >
> > In general, the load is balanced because the data is distributed evenly
> across a cluster of machines. For instance, if you utilize key-value calls
> then each request goes to a specific node. If you're on SQL then a query
> might be broadcasted or sent to a specific node as well.
> >
> > Overall, yes, Ignite is the right solution if you need to scale and
> accelerate performance.
> >
> > --
> > Denis
> >
> > On Tue, Nov 6, 2018 at 6:55 AM Alejandro Santos 
> wrote:
> >>
> >> Hi all,
> >>
> >> I've been reading the Ignite documentation and have some technical
> >> questions. I need to evaluate massive storage systems for a
> >> specific application and I would like to understand how Ignite works.
> >>
> >> My application needs a buffering space that writes arbitrary values,
> >> but then reads on average half of the values at most once. This is a
> >> random process, and we can't really predict which keys will be read.
> >>
> >> Is Ignite the right tool for this application? Do you need more
> information?
> >>
> >> Thank you,
> >>
> >> --
> >> Alejandro Santos
>
>
>
> --
> Alejandro Santos
>


Re: Ways to improve re-balancing of partitions and how to monitor re-balance progress

2018-11-13 Thread Naveen
Has anyone got anything to say on this?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error in write-through

2018-11-13 Thread Ilya Kasnacheev
Hello!

The striped pool works like a traditional thread pool but is faster; a given
job always goes to exactly one stripe.

putAll will not be broken down; it will be executed in a single stripe
(chosen by the first key of the batch, I guess).

I guess so.
-- 
Ilya Kasnacheev
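The striping idea can be sketched in plain Java: N single-threaded executors ("stripes"), with each job routed to exactly one stripe by its key's hash. This is a concept sketch under my own assumptions, not Ignite's internal striped pool, which is more involved:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of a striped pool: jobs for the same key always land on the same
// single-threaded stripe, so they run in submission order and never
// concurrently with each other -- no locking needed between them.
public class StripedPoolSketch {
    private final ExecutorService[] stripes;

    StripedPoolSketch(int n) {
        stripes = new ExecutorService[n];
        for (int i = 0; i < n; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    // Same key -> same stripe, always.
    int stripeOf(Object key) {
        return Math.floorMod(key.hashCode(), stripes.length);
    }

    Future<?> submit(Object key, Runnable job) {
        return stripes[stripeOf(key)].submit(job);
    }

    void shutdown() {
        for (ExecutorService s : stripes) s.shutdown();
    }

    public static void main(String[] args) throws Exception {
        StripedPoolSketch pool = new StripedPoolSketch(4);
        // Two jobs for the same key go to the same stripe and run in order.
        StringBuilder order = new StringBuilder();
        pool.submit("k1", () -> order.append("a"));
        pool.submit("k1", () -> order.append("b")).get();
        System.out.println(order); // prints ab
        pool.shutdown();
    }
}
```

The starvation warnings quoted in this thread arise precisely because a stripe is single-threaded: one slow job (here, a blocked JDBC call) stalls every other job queued on that stripe.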


On Tue, Nov 13, 2018 at 4:31 PM Prasad Bhalerao wrote:

> How does the striped pool work exactly? I read the doc but
> still have some confusion.
>
> Does Ignite break the putAll cache operation into small chunks/tasks and then
> submit them to threads in the striped pool to run them concurrently?
> Is this the only purpose of the striped pool?
>
> Does Ignite use IgniteStripedThreadPoolExecutor for this purpose?
>
> Thanks,
> Prasad
>
> On Tue, Nov 13, 2018 at 6:49 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> As you can see from this thread dump, Oracle driver is waiting on a
>> socket, probably for a query response.
>> You should probably take a look at hanging queries from Oracle side.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Tue, Nov 13, 2018 at 9:21 AM Akash Shinde wrote:
>>
>>> Hi,
>>> I have started four Ignite nodes and configured a cache in distributed
>>> mode. When I initiated thousands of requests to write data to this
>>> cache (write-through enabled), I am facing the below error.
>>> From the logs we can see this error occurs while writing to the Oracle
>>> database (using cache write-through).
>>> This error is not consistent. The node does stall for a while after this
>>> error and continues to pick up the next Ignite tasks.
>>> Could someone please advise what the following log means?
>>>
>>>
>>> 2018-11-13 05:52:05,577 2377545 [core-1] INFO
>>> c.q.a.a.s.AssetManagementService - Add asset request processing started,
>>> requestId ADD_Ip_483, subscriptionId =262604, userId=547159
>>> 2018-11-13 05:52:06,647 2378615
>>> [grid-timeout-worker-#39%springDataNode%] WARN
>>> o.a.ignite.internal.util.typedef.G - >>> Possible starvation in striped
>>> pool.
>>> Thread name: sys-stripe-11-#12%springDataNode%
>>> Queue: [Message closure [msg=GridIoMessage [plc=2,
>>> topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0,
>>> skipOnTimeout=false, msg=GridNearSingleGetResponse [futId=1542085977929,
>>> res=BinaryObjectImpl [arr= true, ctx=false, start=0], topVer=null,
>>> err=null, flags=0]]], Message closure [msg=GridIoMessage [plc=2,
>>> topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0,
>>> skipOnTimeout=false, msg=GridDhtTxPrepareResponse [nearEvicted=null,
>>> futId=b67ac7b0761-93ebea72-bf4e-40d8-8a19-d3258be94ce9, miniId=1,
>>> super=GridDistributedTxPrepareResponse [txState=null, part=-1, err=null,
>>> super=GridDistributedBaseMessage [ver=GridCacheVersion [topVer=153565953,
>>> order=1542089536997, nodeOrder=3], committedVers=null, rolledbackVers=null,
>>> cnt=0, super=GridCacheIdMessage [cacheId=0]], Message closure
>>> [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false,
>>> timeout=0, skipOnTimeout=false, msg=GridNearSingleGetRequest
>>> [futId=1542085976178, key=BinaryObjectImpl [arr= true, ctx=false, start=0],
>>> flags=1, topVer=AffinityTopologyVersion [topVer=7, minorTopVer=0],
>>> subjId=9e8db7e7-48ba-4161-881b-ad4fcfc175a0, taskNameHash=0, createTtl=-1,
>>> accessTtl=-1
>>> Deadlock: false
>>> Completed: 703
>>> Thread [name="sys-stripe-11-#12%springDataNode%", id=41, state=RUNNABLE,
>>> blockCnt=37, waitCnt=729]
>>> at java.net.SocketInputStream.socketRead0(Native Method)
>>> at
>>> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>>> at java.net.SocketInputStream.read(SocketInputStream.java:171)
>>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>>> at oracle.net.ns.Packet.receive(Packet.java:311)
>>> at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
>>> at
>>> oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
>>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
>>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
>>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
>>> at
>>> oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
>>> at
>>> oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
>>> at
>>> oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
>>> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
>>> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
>>> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
>>> at
>>> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
>>> at
>>> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
>>> at
>>> oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)

Re: Error in write-through

2018-11-13 Thread Prasad Bhalerao
How does the striped pool work exactly? I read the doc but
still have some confusion.

Does Ignite break the putAll cache operation into small chunks/tasks and then
submit them to threads in the striped pool to run them concurrently?
Is this the only purpose of the striped pool?

Does Ignite use IgniteStripedThreadPoolExecutor for this purpose?

Thanks,
Prasad

On Tue, Nov 13, 2018 at 6:49 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> As you can see from this thread dump, Oracle driver is waiting on a
> socket, probably for a query response.
> You should probably take a look at hanging queries from Oracle side.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Tue, Nov 13, 2018 at 9:21 AM Akash Shinde wrote:
>
>> Hi,
>> I have started four Ignite nodes and configured a cache in distributed
>> mode. When I initiated thousands of requests to write data to this
>> cache (write-through enabled), I am facing the below error.
>> From the logs we can see this error occurs while writing to the Oracle
>> database (using cache write-through).
>> This error is not consistent. The node does stall for a while after this
>> error and continues to pick up the next Ignite tasks.
>> Could someone please advise what the following log means?
>>
>>
>> 2018-11-13 05:52:05,577 2377545 [core-1] INFO
>> c.q.a.a.s.AssetManagementService - Add asset request processing started,
>> requestId ADD_Ip_483, subscriptionId =262604, userId=547159
>> 2018-11-13 05:52:06,647 2378615 [grid-timeout-worker-#39%springDataNode%]
>> WARN  o.a.ignite.internal.util.typedef.G - >>> Possible starvation in
>> striped pool.
>> Thread name: sys-stripe-11-#12%springDataNode%
>> Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE,
>> topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false,
>> msg=GridNearSingleGetResponse [futId=1542085977929, res=BinaryObjectImpl
>> [arr= true, ctx=false, start=0], topVer=null, err=null, flags=0]]], Message
>> closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
>> ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtTxPrepareResponse
>> [nearEvicted=null, futId=b67ac7b0761-93ebea72-bf4e-40d8-8a19-d3258be94ce9,
>> miniId=1, super=GridDistributedTxPrepareResponse [txState=null, part=-1,
>> err=null, super=GridDistributedBaseMessage [ver=GridCacheVersion
>> [topVer=153565953, order=1542089536997, nodeOrder=3], committedVers=null,
>> rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0]],
>> Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
>> ordered=false, timeout=0, skipOnTimeout=false, msg=GridNearSingleGetRequest
>> [futId=1542085976178, key=BinaryObjectImpl [arr= true, ctx=false, start=0],
>> flags=1, topVer=AffinityTopologyVersion [topVer=7, minorTopVer=0],
>> subjId=9e8db7e7-48ba-4161-881b-ad4fcfc175a0, taskNameHash=0, createTtl=-1,
>> accessTtl=-1
>> Deadlock: false
>> Completed: 703
>> Thread [name="sys-stripe-11-#12%springDataNode%", id=41, state=RUNNABLE,
>> blockCnt=37, waitCnt=729]
>> at java.net.SocketInputStream.socketRead0(Native Method)
>> at
>> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>> at java.net.SocketInputStream.read(SocketInputStream.java:171)
>> at java.net.SocketInputStream.read(SocketInputStream.java:141)
>> at oracle.net.ns.Packet.receive(Packet.java:311)
>> at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
>> at
>> oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
>> at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
>> at
>> oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
>> at
>> oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
>> at
>> oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
>> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
>> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
>> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
>> at
>> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
>> at
>> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
>> at
>> oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
>> at
>> oracle.jdbc.driver.OraclePreparedStatement.executeForRowsWithTimeout(OraclePreparedStatement.java:12029)
>> at
>> oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12140)
>> at
>> oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
>> at
>> 

Re: Query execution too long even after providing index

2018-11-13 Thread Prasad Bhalerao
Hi Evgenii,

Thank you for suggesting the query optimization. It worked perfectly fine.
I unnecessarily complicated the SQL.
I really appreciate the efforts you guys are taking to help out the users.

About the test data: yes, in production we will have more than 100K
records for a single subscription and moduleId.
The generated test data has 5 million entries for one subscriptionId and
moduleId. This is a worst-case scenario, but we do have such cases in
production.

Thanks,
Prasad
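The single-condition overlap test suggested in the quoted reply (ipStart <= MAX AND ipEnd >= MIN) can be sketched in plain Java; the method and parameter names are illustrative, not from the actual project:

```java
// Two ranges [ipStart, ipEnd] and [min, max] intersect if and only if
// ipStart <= max AND ipEnd >= min -- one condition covers partial overlap,
// containment, and exact match, so no three-way OR is needed in the SQL.
public class Overlap {
    static boolean intersects(long ipStart, long ipEnd, long min, long max) {
        return ipStart <= max && ipEnd >= min;
    }

    public static void main(String[] args) {
        System.out.println(intersects(10, 20, 15, 30)); // prints true  (partial overlap)
        System.out.println(intersects(10, 20, 12, 15)); // prints true  (containment)
        System.out.println(intersects(10, 20, 25, 30)); // prints false (disjoint)
    }
}
```

Besides being simpler, a single AND-ed condition is easier for the SQL engine to serve from a composite index than an OR of range predicates.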

On Mon, Oct 22, 2018 at 6:22 PM Evgenii Zhuravlev 
wrote:

> Well, it looks like you are trying to find the segments that intersect a
> defined segment, but you are overcomplicating it. You don't need 3 conditions
> here; a single one is enough: ipStart <= MAX AND ipEnd >= MIN. I've checked
> it for your case and got exactly the same results as you had with the more
> complex query.
>
> Additionally, you have the same subscriptionId AND moduleId in your test
> data, which means you will get bad selectivity when you filter by these
> fields first. Do you really have such data in your production?
>
> Also, when you measure something again, perform the operation many times;
> that is how benchmarks work. Ignite initializes internal structures on the
> first executions, so measuring just once will not give an idea of the
> overall latency.
>
> Best Regards,
> Evgenii
>
> On Mon, Oct 22, 2018 at 6:56 AM Prasad Bhalerao wrote:
>
>> Hi Evgenii,
>>
>> Did you get time to check the reproducer?
>>
>> Can you please suggest solution for this?
>>
>>
>> Thanks,
>> Prasad
>>
>>
>> On Fri, Oct 19, 2018, 4:46 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi Evgenii,
>>>
>>> I have created a reproducer and uploaded it to GitHub. I have created 5
>>> cases to test the sql execution time.
>>>
>>> GitHub project: https://github.com/prasadbhalerao1983/IgniteTestPrj.git
>>>
>>> Please run IgniteQueryTester class.
>>>
>>> Thanks,
>>> Prasad
>>>
>>> On Wed, Oct 17, 2018 at 7:46 PM ezhuravlev 
>>> wrote:
>>>
 How much data do you have?  What is the amount of heap and offheap
 memory?
 Can you share the reproducer with the community?

 Evgenii



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>


Re: Error in write-through

2018-11-13 Thread Ilya Kasnacheev
Hello!

As you can see from this thread dump, Oracle driver is waiting on a socket,
probably for a query response.
You should probably take a look at hanging queries from Oracle side.

Regards,
-- 
Ilya Kasnacheev


On Tue, Nov 13, 2018 at 9:21 AM Akash Shinde wrote:

> Hi,
> I have started four Ignite nodes and configured a cache in distributed mode.
> When I initiated thousands of requests to write data to this
> cache (write-through enabled), I am facing the below error.
> From the logs we can see this error occurs while writing to the Oracle
> database (using cache write-through).
> This error is not consistent. The node does stall for a while after this
> error and continues to pick up the next Ignite tasks.
> Could someone please advise what the following log means?
>
>
> 2018-11-13 05:52:05,577 2377545 [core-1] INFO
> c.q.a.a.s.AssetManagementService - Add asset request processing started,
> requestId ADD_Ip_483, subscriptionId =262604, userId=547159
> 2018-11-13 05:52:06,647 2378615 [grid-timeout-worker-#39%springDataNode%]
> WARN  o.a.ignite.internal.util.typedef.G - >>> Possible starvation in
> striped pool.
> Thread name: sys-stripe-11-#12%springDataNode%
> Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE,
> topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false,
> msg=GridNearSingleGetResponse [futId=1542085977929, res=BinaryObjectImpl
> [arr= true, ctx=false, start=0], topVer=null, err=null, flags=0]]], Message
> closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
> ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtTxPrepareResponse
> [nearEvicted=null, futId=b67ac7b0761-93ebea72-bf4e-40d8-8a19-d3258be94ce9,
> miniId=1, super=GridDistributedTxPrepareResponse [txState=null, part=-1,
> err=null, super=GridDistributedBaseMessage [ver=GridCacheVersion
> [topVer=153565953, order=1542089536997, nodeOrder=3], committedVers=null,
> rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0]],
> Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
> ordered=false, timeout=0, skipOnTimeout=false, msg=GridNearSingleGetRequest
> [futId=1542085976178, key=BinaryObjectImpl [arr= true, ctx=false, start=0],
> flags=1, topVer=AffinityTopologyVersion [topVer=7, minorTopVer=0],
> subjId=9e8db7e7-48ba-4161-881b-ad4fcfc175a0, taskNameHash=0, createTtl=-1,
> accessTtl=-1
> Deadlock: false
> Completed: 703
> Thread [name="sys-stripe-11-#12%springDataNode%", id=41, state=RUNNABLE,
> blockCnt=37, waitCnt=729]
> at java.net.SocketInputStream.socketRead0(Native Method)
> at
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> at java.net.SocketInputStream.read(SocketInputStream.java:171)
> at java.net.SocketInputStream.read(SocketInputStream.java:141)
> at oracle.net.ns.Packet.receive(Packet.java:311)
> at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
> at
> oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
> at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
> at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
> at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
> at
> oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
> at
> oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
> at
> oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
> at
> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
> at
> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
> at
> oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
> at
> oracle.jdbc.driver.OraclePreparedStatement.executeForRowsWithTimeout(OraclePreparedStatement.java:12029)
> at
> oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12140)
> at
> oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
> at
> com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
> at
> com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
> at
> com.qualys.agms.grid.cache.loader.AbstractDefaultCacheStore.writeAll(AbstractDefaultCacheStore.java:126)
> at
> o.a.i.i.processors.cache.store.GridCacheStoreManagerAdapter.putAll(GridCacheStoreManagerAdapter.java:641)
> at
> o.a.i.i.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1422)
> at
>