Re: copyOnRead to false

2017-10-22 Thread colinc
Benchmarks I've run on this flag, and tests mutating the entry
(acknowledging that this would be bad practice in production), agree with
Evgenii's comments: namely, that this attribute only has an effect for
on-heap caches (I'm using 2.1). I find it has more of an effect in cases
where the same records are read multiple times, as compared with a single
scan-type operation. The effect is about 50% faster than when using off-heap
storage only.

Having said this, it is still 100 times slower than a HashMap for local read
operations. For the benefit of my understanding, what is likely to be taking
the time in this case? The mutation test suggests that there is no
marshalling/copying. Is there anything we can configure to improve on this?
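For context, the setup being benchmarked here (copyOnRead disabled on an on-heap cache) can be configured roughly as follows. This is a sketch only: the cache name and key/value types are illustrative, not from this thread.

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class CopyOnReadConfig {
    public static CacheConfiguration<Long, Object> config() {
        // Hypothetical cache name; types are illustrative.
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("testCache");

        // Keep a deserialized on-heap copy of entries (Ignite 2.x).
        // copyOnRead only matters when the on-heap cache is enabled.
        ccfg.setOnheapCacheEnabled(true);

        // Hand out the cached on-heap instance itself rather than a copy
        // on each read -- this is also what makes mutating the returned
        // entry visible (and dangerous) in the first place.
        ccfg.setCopyOnRead(false);

        return ccfg;
    }
}
```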

Regards,
Colin.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Hammering Cassandra with Ignite seems to get Ignite into an infinite loop

2017-10-22 Thread Tobias Eriksson
Hi,
I am testing to see if Ignite v2.1 with Cassandra v3.11 is the way to go for us.
I have an AWS cluster of 4 nodes:
2 nodes with Ignite (nodes B and C)
1 node with Cassandra (node B)
4 nodes which act as Ignite client nodes, updating/inserting values into a
simple key-value cache (serialized Java class) (nodes A, B, C and D)

As I was adding client apps 1, 2, 3 and eventually a 4th client, I noticed the
exception below on Ignite node (C).
It seems to have run into a bad loop that it does not get out of.
Is this a known bug?
-Tobias

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=ee941d86, name=null, uptime=00:12:00:054]
^-- H/N/C [hosts=4, nodes=5, CPUs=64]
^-- CPU [cur=12.33%, avg=11.77%, GC=0.03%]
^-- PageMemory [pages=2518090]
^-- Heap [used=365MB, free=63.92%, comm=1013MB]
^-- Non heap [used=77MB, free=94.9%, comm=79MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]
2017-10-20 16:57:22 INFO  IgniteKernal:475 - FreeList [name=null, buckets=256, 
dataPages=1666280, reusePages=817]
2017-10-20 16:57:23 INFO  GridDiscoveryManager:475 - Node left topology: 
TcpDiscoveryNode [id=77067a17-7be8-4382-9741-fbfff82a7d84, 
addrs=[0:0:0:0:0:0:0:1%lo, 10.150.4.223, 127.0.0.1, 172.17.0.1, 172.18.0.1, 
172.19.0.1, 172.20.0.1, 172.21.0.1], sockAddrs=[/172.17.0.1:0, 
/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, 
ip-172-19-0-1.eu-west-1.compute.internal/172.19.0.1:0, 
ip-172-18-0-1.eu-west-1.compute.internal/172.18.0.1:0, 
ip-172-21-0-1.eu-west-1.compute.internal/172.21.0.1:0, 
ip-172-20-0-1.eu-west-1.compute.internal/172.20.0.1:0, /10.150.4.223:0], 
discPort=0, order=5, intOrder=5, lastExchangeTime=1508507191082, loc=false, 
ver=2.1.0#20170720-sha1:a6ca5c8a, isClient=true]
2017-10-20 16:57:23 INFO  GridDiscoveryManager:475 - Topology snapshot [ver=8, 
servers=2, clients=2, CPUs=64, heap=15.0GB]
2017-10-20 16:57:23 INFO  time:475 - Started exchange init 
[topVer=AffinityTopologyVersion [topVer=8, minorTopVer=0], crd=false, evt=11, 
node=TcpDiscoveryNode [id=ee941d86-d0ae-4b6d-a484-2f249c2caa62, 
addrs=[0:0:0:0:0:0:0:1%lo, 10.150.4.224, 127.0.0.1], 
sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, 
ip-10-150-4-224.eu-west-1.compute.internal/10.150.4.224:47500], discPort=47500, 
order=2, intOrder=2, lastExchangeTime=1508507843554, loc=true, 
ver=2.1.0#20170720-sha1:a6ca5c8a, isClient=false], evtNode=TcpDiscoveryNode 
[id=ee941d86-d0ae-4b6d-a484-2f249c2caa62, addrs=[0:0:0:0:0:0:0:1%lo, 
10.150.4.224, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, 
/127.0.0.1:47500, 
ip-10-150-4-224.eu-west-1.compute.internal/10.150.4.224:47500], discPort=47500, 
order=2, intOrder=2, lastExchangeTime=1508507843554, loc=true, 
ver=2.1.0#20170720-sha1:a6ca5c8a, isClient=false], customEvt=null]
2017-10-20 16:57:23 INFO  GridDhtPartitionsExchangeFuture:475 - Snapshot 
initialization completed [topVer=AffinityTopologyVersion [topVer=8, 
minorTopVer=0], time=0ms]
2017-10-20 16:57:23 INFO  time:475 - Finished exchange init 
[topVer=AffinityTopologyVersion [topVer=8, minorTopVer=0], crd=false]
2017-10-20 16:57:23 INFO  GridCachePartitionExchangeManager:475 - Skipping 
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=8, 
minorTopVer=0], evt=NODE_LEFT, node=77067a17-7be8-4382-9741-fbfff82a7d84]
2017-10-20 16:57:25 INFO  ClockFactory:52 - Using native clock to generate 
timestamps.
2017-10-20 16:57:25 INFO  DCAwareRoundRobinPolicy:95 - Using data-center name 
'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide 
the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2017-10-20 16:57:25 INFO  Cluster:1568 - New Cassandra host /10.150.4.223:9042 
added
2017-10-20 16:57:25 WARN  CassandraCacheStore:485 - Prepared statement cluster 
error detected, refreshing Cassandra session
com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute 
unknown prepared query : 0x76fccec193ce5098fb3094cfdb082930. You may have used 
a PreparedStatement that was created with another Cluster instance.
at com.datastax.driver.core.SessionManager.makeRequestMessage(SessionManager.java:578)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:131)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:231)
at org.apache.ignite.cache.store.cassandra.CassandraCacheStore.writeAll(CassandraCacheStore.java:354)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.putAll(GridCacheStoreManagerAdapter.java:625)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updatePartialBatch(GridDhtAtomicCache.java:2643)
at

Cannot find schema for object with compact footer

2017-10-22 Thread zshamrock
From time to time we get the following integration test failure (using the
mvn test command). Sometimes it passes without any problems, sometimes it
fails, and not necessarily on the same test. It never fails when running the
test standalone, i.e. from IDEA for example.

Here is the failure:

Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.418 sec
<<< FAILURE! - in SessionLineupsCacheITSpec
verify get on the existing session id returns the corresponding session
lineups(SessionLineupsCacheITSpec)  Time elapsed: 0.142 sec  <<< ERROR!
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
Cannot find schema for object with compact footer [typeId=709293316,
schemaId=96980434]
at SessionLineupsCacheITSpec.verify get on the existing session id 
returns
the corresponding session lineups(SessionLineupsCacheITSpec.groovy:48)
Caused by: org.apache.ignite.IgniteCheckedException: Cannot find schema for
object with compact footer [typeId=709293316, schemaId=96980434]
at SessionLineupsCacheITSpec.verify get on the existing session id 
returns
the corresponding session lineups(SessionLineupsCacheITSpec.groovy:48)
Caused by: org.apache.ignite.binary.BinaryObjectException: Cannot find
schema for object with compact footer [typeId=709293316, schemaId=96980434]

What does "Cannot find schema for object with compact footer" mean, and
what could be the cause of this inconsistent behavior, i.e. why does it pass
sometimes and fail other times?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to query to local partition cache data

2017-10-22 Thread dark
My reading of the documentation is:

setLocal makes the query run on the local node only
setCollocated tells the engine that the data involved in the query's joins is
collocated

It seems like this. But then why does a query against a partitioned cache
still increase the GridQueryExecutor's task count on all nodes?

Thank you.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to query to local partition cache data

2017-10-22 Thread dark
In addition, I am confused about what exactly each of the following settings
on SqlFieldsQuery means:

setLocal
setCollocated

Can you explain the meaning of these with an example?
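For concreteness, here is a sketch of how the two flags are set on SqlFieldsQuery. The cache name, table and columns are hypothetical (borrowed from the query in this thread), and this assumes a running Ignite node is passed in:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class QueryFlagsExample {
    public static void run(Ignite ignite) {
        IgniteCache<Long, Object> cache = ignite.cache("CacheTable");

        // setLocal(true): execute only against this node's local data;
        // no map/reduce fan-out to other nodes, so results are partial.
        SqlFieldsQuery localQry =
            new SqlFieldsQuery("select path, sum(count) from CacheTable group by path")
                .setLocal(true);

        // setCollocated(true): a hint that the grouped/joined data is
        // collocated by affinity key, allowing the engine to skip the
        // distributed re-aggregation step.
        SqlFieldsQuery collocatedQry =
            new SqlFieldsQuery("select path, sum(count) from CacheTable group by path")
                .setCollocated(true);

        List<List<?>> localRows = cache.query(localQry).getAll();
        List<List<?>> collocatedRows = cache.query(collocatedQry).getAll();
    }
}
```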



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to query to local partition cache data

2017-10-22 Thread dark
How can I query only the data held by an Ignite data node?
I've used it like this, but I've confirmed that the CompletedTaskCount of
the GridQueryExecutor goes up on other nodes as well.

ignite.compute().affinityRun(cacheName, 1, () -> {
    Ignite ignite = Ignition.ignite();
    IgniteCache cache = ignite.cache(cacheName);

    SqlFieldsQuery query = new SqlFieldsQuery(
        "select path, sum(count) from CacheTable group by path").setLocal(true);

    long queryStartTimeMillis = System.currentTimeMillis();

    List all = cache.query(query).getAll();
});

I think that when I query like the above, the compute task is delivered to
the node in charge of partition key 1 and the query runs only over the
partitions that node is responsible for. However, the CompletedTaskCount of
the GridQueryExecutor increases on all nodes. :(

ignite.compute().affinityRun(cacheName, 1, () -> {
    Ignite ignite = Ignition.ignite();
    IgniteCache cache = ignite.cache(cacheName);

    SqlFieldsQuery query = new SqlFieldsQuery(
        "select path, sum(count) from CacheTable group by path").setPartitions(1);

    long queryStartTimeMillis = System.currentTimeMillis();

    List all = cache.query(query).getAll();
});

When I query like the above, I have confirmed that the CompletedTaskCount of
the GridQueryExecutor increases only on the node in charge of the partition.

How can I force the data node to query only its own data?






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/