NullPointerException: Ouch! Argument cannot be null: key while performing cache.getAll

2019-01-09 Thread kotamrajuyashasvi
Hi

I'm working on a project that uses Ignite as an in-memory cache with Cassandra
as the persistence layer for Ignite.
I need to perform cache.getAll(..) on a set of POJO cache keys I have built. On
random runs I am facing the exception below.

Failed to acquire lock for request: GridNearLockRequest
[topVer=AffinityTopologyVersion [topVer=6, minorTopVer=1], miniId=1,
dhtVers=[null, null, ... repeated null entries elided ...],
subjId=98637eda-1931-441f-a0b8-875162969ac0,
taskNameHash=0, createTtl=-1, accessTtl=-1, flags=5, filter=null,
super=GridDistributedLockRequest
[nodeId=98637eda-1931-441f-a0b8-875162969ac0, nearXidVer=GridCacheVersion
[topVer=158492748, order=1547015993291, nodeOrder=2], threadId=155,
futId=569a5213861-cbfbf917-fcc5-410e-aaba-aea33f2f2f35, timeout=50,
isInTx=true, isInvalidate=false, isRead=true, isolation=REPEATABLE_READ,
retVals=[true, true, ... repeated true entries elided ...],
txSize=0, flags=0, keysCnt=100,
super=GridDistributedBaseMessage [ver=GridCacheVersion [topVer=158492748,
order=1547015993291, nodeOrder=2], committedVers=null, rolledbackVers=null,
cnt=0, super=GridCacheIdMessage [cacheId=-379566268
class org.apache.ignite.IgniteCheckedException:
java.lang.NullPointerException: Ouch! Argument cannot be null: key
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:498)
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:400)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.loadMissingFromStore(GridDhtLockFuture.java:1054)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onComplete(GridDhtLockFuture.java:731)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onDone(GridDhtLockFuture.java:703)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onDone(GridDhtLockFuture.java:82)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
    at org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285)
    at org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.map(GridDhtLockFuture.java:966)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onOwnerChanged(GridDhtLockFuture.java:655)
    at org.apache.ignite.internal.processors.cache.GridCacheMvccManager.notifyOwnerChanged(GridCacheMvccManager.java:226)
    at org.apache.ignite.internal.processors.cache.GridCacheMvccManager.access$200(GridCacheMvccManager.java:80)
    at org.apache.ignite.internal.processors.cache.GridCacheMvccManager$3.onOwnerChanged(GridCacheMvccManager.java:163)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.checkOwnerChanged(GridCacheMapEntry.java:4108)
    at org.apache.ignite.internal.processors.cache.distributed.GridDistributedCacheEntry.readyLock(GridDistributedCacheEntry.java:499)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.readyLocks(GridDhtLockFuture.java:567)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.map(GridDhtLockFuture.java:764)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.lockAllAsyncInternal(GridDhtTransactionalCacheAdapter.java:864)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.obtainLockAsync(GridDhtTxLocalAdapter.java:693)
    at
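The "Ouch! Argument cannot be null: key" message means a null key reached the store's loadAll path, which most often happens when a null slips into the key set passed to cache.getAll(...). A defensive check turns the random failure into a deterministic one at the call site. This is a stdlib-only sketch; the helper name and the idea of pre-validating the keys are an assumption, not something from the thread:

```java
import java.util.LinkedHashSet;
import java.util.Objects;
import java.util.Set;

public class KeyGuard {
    // Reject null keys before they reach cache.getAll(keys);
    // Ignite itself fails later, deep inside the store manager, with
    // "Ouch! Argument cannot be null: key".
    static <K> Set<K> requireNonNullKeys(Set<K> keys) {
        Objects.requireNonNull(keys, "keys");
        Set<K> checked = new LinkedHashSet<>();
        for (K key : keys) {
            if (key == null)
                throw new IllegalArgumentException("null key in getAll() key set");
            checked.add(key);
        }
        return checked;
    }

    public static void main(String[] args) {
        Set<String> good = new LinkedHashSet<>();
        good.add("k1");
        good.add("k2");
        System.out.println(requireNonNullKeys(good).size()); // prints 2

        Set<String> bad = new LinkedHashSet<>();
        bad.add("k1");
        bad.add(null); // LinkedHashSet happily holds null, so this can slip through
        try {
            requireNonNullKeys(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Calling the check right before cache.getAll(requireNonNullKeys(keys)) pinpoints which key-building path occasionally produces the null.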

Re: How to add new nodes to a running cluster?

2019-01-09 Thread Justin Ji
Did you add this node to the baseline topology?

https://apacheignite.readme.io/docs/baseline-topology



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite kv/sql features

2019-01-09 Thread summasumma
Hi all,

Can you please clarify the following possibilities in Ignite?
1. If I insert multiple entries into a KV cache, is it possible to retrieve
selected rows based on a particular column using a SQL query on the same
cache? (i.e., insert using KV operations but read using SQL queries)

2. If I want to retrieve all the entries matching one single column (which is
not the key), should we use a SCAN operation, or
is it better to create another cache with that single column as the key and
perform the equivalent of HGETALL on that separate cache? Assuming memory
availability is not an issue, what is the performance impact of these methods?

3. In an Ignite cluster, if one node goes down, how much time does it take to
rebalance the entries among the remaining nodes? Is there any CLI to validate
whether the rebalance is actually over or not? As a client node, will it get
event notifications, e.g. that one of the nodes in the cluster is down /
rebalance started / rebalance done?

4. Is it possible to use the compute functionality in Ignite to insert a single
entry into multiple caches asynchronously? Say "key1, {val1, val2}"
is the record; I want this element to be inserted into cache1 with key1 as the
key for that row, and also into another cache 'cache2' with 'val1' as the
key. The idea is to issue one insert request from the client to the Ignite
server cluster, but have it result in multiple insertions into multiple tables
based on, say, custom compute functionality on the server.

Please clarify.

Thanks,
...summa
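The tradeoff in question 2 (scan vs. a second cache keyed by the column) can be sketched without Ignite at all: a scan is O(n) per query with no extra memory, while a second map gives O(1) lookups but must be maintained on every write. A minimal stand-in using plain maps — the names and structure are illustrative, not Ignite API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScanVsIndex {
    // Primary "cache": key -> city (the non-key column we query by).
    static final Map<String, String> primary = new HashMap<>();
    // Secondary "cache": city -> keys, maintained on every put
    // (extra memory and write work, but O(1) reads).
    static final Map<String, List<String>> byCity = new HashMap<>();

    static void put(String key, String city) {
        primary.put(key, city);
        byCity.computeIfAbsent(city, c -> new ArrayList<>()).add(key);
    }

    // Scan-style query: walks every entry, no extra memory needed.
    static List<String> scanByCity(String city) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, String> e : primary.entrySet())
            if (e.getValue().equals(city))
                hits.add(e.getKey());
        return hits;
    }

    public static void main(String[] args) {
        put("k1", "Pune");
        put("k2", "Delhi");
        put("k3", "Pune");
        System.out.println(scanByCity("Pune")); // scans all 3 entries
        System.out.println(byCity.get("Pune")); // prints [k1, k3] in one lookup
    }
}
```

In Ignite terms, the same choice shows up as ScanQuery over one cache versus a second cache (or a SQL index) on the column; the second structure trades write amplification and memory for fast reads.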





Re: Does Ignite message support C++?

2019-01-09 Thread Denis Magda
Do continuous queries work for you?
https://apacheignite-cpp.readme.io/docs/continuous-queries

Denis

On Wed, Jan 9, 2019 at 4:08 PM SamsonLai  wrote:

> Thanks a lot
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Does Ignite message support C++?

2019-01-09 Thread SamsonLai
Thanks a lot





Ignite-benchmark- driver classname not found

2019-01-09 Thread radha jai
Hi,
   I ran the Ignite benchmarks on a VM. The Ignite version used is 2.6.0.
   Command: ./benchmark-run-all.sh ../config/benchmark-remote.properties
   Some of the benchmarks didn't run, saying:
   log4j:WARN No appenders could be found for logger
(org.reflections.Reflections).
   log4j:WARN Please initialize the log4j system properly.
  log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
<19:23:39> Duplicate simple class names detected (use
fully-qualified names for execution):
<19:23:39>
 org.apache.ignite.yardstick.cache.IgniteIoTestSendAllBenchmark
<19:23:39>
 org.apache.ignite.yardstick.io.IgniteIoTestSendRandomBenchmark
ERROR: Could not find benchmark driver class name in classpath:
IgnitePutTxOffHeapValuesBenchmark.
Make sure class name is specified correctly and corresponding package is
added to -p argument list.
Type '--help' for usage.


I wasn't able to run the benchmarks below due to the above error:
IgnitePutTxOffHeapValuesBenchmark
IgnitePutTxOffHeapBenchmark
IgniteSqlQueryJoinOffHeapBenchmark
IgnitePutOffHeapBenchmark
IgnitePutOffHeapValuesBenchmark
IgnitePutGetOffHeapValuesBenchmark
IgniteSqlQueryOffHeapBenchmark

Thanks
With Regards
Radha


Re: Graph Query Integration

2019-01-09 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution for persisted Lucene and spatial indexes.





Re: Text Query question

2019-01-09 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution for persisted Lucene and spatial indexes.





Re: Extra console output from logs.

2019-01-09 Thread javadevmtl
More precisely, this is what we see...

This line is good:
{"appTimestamp":"2019-01-09T18:29:34.298+00:00","threadName":"vert.x-worker-thread-0","level":"INFO","loggerName":"org.apache.ignite.internal.IgniteKernal%xx-dev","message":"\n\n>>>
   
__    \n>>>   /  _/ ___/ |/ /  _/_  __/ __/  \n>>> 
_/ // (7 7// /  / / / _/\n>>> /___/\\___/_/|_/___/ /_/ /___/   \n>>>
\n>>> ver. 2.7.0#20181130-sha1:256ae401\n>>> 2018 Copyright(C) Apache
Software Foundation\n>>> \n>>> Ignite documentation:
http://ignite.apache.org\n"}

The below shouldn't print:
[13:29:34]__   
[13:29:34]   /  _/ ___/ |/ /  _/_  __/ __/ 
[13:29:34]  _/ // (7 7// /  / / / _/   
[13:29:34] /___/\___/_/|_/___/ /_/ /___/  
[13:29:34] 
[13:29:34] ver. 2.7.0#20181130-sha1:256ae401
[13:29:34] 2018 Copyright(C) Apache Software Foundation
[13:29:34] 
[13:29:34] Ignite documentation: http://ignite.apache.org
[13:29:34] 
[13:29:34] Quiet mode.
[13:29:34]   ^-- Logging by 'Slf4jLogger
[impl=Logger[o.a.i.i.IgniteKernal%xx-dev], quiet=true]'
[13:29:34]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[13:29:34] 





Extra console output from logs.

2019-01-09 Thread javadevmtl
Hi, we are using 2.3.7 with Slf4J and a JSON encoder so all our logs can print
as JSON.

We setup the logger as follows...

IgniteLogger log = new Slf4jLogger();
igniteConfig.setGridLogger(log);


But we have noticed that there is duplicate output from Ignite to the console:
once in the desired JSON and once as plain console output.

I remember reading somewhere that there was a bug related to this?
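If the plain-console copy is the only problem, Ignite's console banner is controlled by startup system properties rather than by the configured logger. A hedged sketch of the startup options — IGNITE_QUIET appears in the quiet-mode hint Ignite itself prints, and IGNITE_NO_ASCII is assumed here to suppress the ASCII logo; verify both against your version:

```shell
# Keep quiet mode on and suppress the ASCII banner on stdout;
# the Slf4jLogger JSON output is unaffected (flags assumed, verify per version).
java -DIGNITE_QUIET=true -DIGNITE_NO_ASCII=true -cp app.jar com.example.Main
```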







Failed to wait for partition map exchange on cluster activation

2019-01-09 Thread Andrey Davydov

Hello,
I found in the test logs of my project that Ignite warns about a failed
partition map exchange. In the test environment, three Ignite 2.7 server nodes
run in the same JVM 8 on Windows 10, using localhost networking.

2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.
2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.
2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 4 ms.
2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 5 ms.
2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
err=null]
2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
err=null]
2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=6ms, 
walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=8ms, 
walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=22ms, 
walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec, pages=204, 
markPos=FileWALPointer [idx=0, fileOff=929726, len=31143], 
walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms, 
pagesWrite=84ms, fsync=1992ms, total=3179ms]
2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b, pages=204, 
markPos=FileWALPointer [idx=0, fileOff=929726, len=31143], 
walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1213ms, 
pagesWrite=79ms, fsync=2066ms, total=3358ms]
2019-01-09 20:15:30,998 [db-checkpoint-thread-#146%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc, pages=204, 
markPos=FileWALPointer [idx=0, fileOff=929726, len=31143], 
walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1262ms, 
pagesWrite=79ms, fsync=2203ms, total=3544ms]
2019-01-09 20:15:37,510 [exchange-worker-#44%TestNode-0%] WARN  
org.apache.ignite.internal.diagnostic:118 - Failed to wait for partition map 
exchange [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
node=454d2051-cea6-4f2c-99a7-7c5698494175]. Dumping pending objects that might 
be the cause: 
2019-01-09 20:15:37,510 [exchange-worker-#44%TestNode-0%] WARN  
org.apache.ignite.internal.diagnostic:118 - Ready affinity version: 
AffinityTopologyVersion [topVer=-1, minorTopVer=0]
2019-01-09 20:15:37,515 [exchange-worker-#44%TestNode-0%] WARN  
org.apache.ignite.internal.diagnostic:118 - Last exchange future: …
2019-01-09 20:15:37,515 [exchange-worker-#44%TestNode-0%] WARN  
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager:118
 - First 10 pending exchange futures [total=0]
2019-01-09 20:15:37,518 

Re: SqlQuery retrieves same cache entry twice. ScanQuery results conflicts with indentical SqlQuery

2019-01-09 Thread oshevchenko
Hi Ilya,

Thanks for the quick reply on my problem. I am running 2.5. Looks like the
issue I have has to do with IGNITE-8900. I keep my fingers
crossed that this serious issue is gone in 2.7.





Re: SqlQuery retrieves same cache entry twice. ScanQuery results conflicts with indentical SqlQuery

2019-01-09 Thread oshevchenko
Hi Ilya,

Thanks a lot for your reply. I am running 2.5. Looks like my problem has to
do with IGNITE-8900,
which should be fixed in 2.7. I keep my fingers crossed that 2.7 fixes this
serious issue.






Re: How to setup multi host node discovery

2019-01-09 Thread newigniter
Thanks for your help. Below is my config. Did you mean something like this?

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1</value>
                                <value>[ec2 ip address]:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

[ec2 ip address]:47500..47509 is the IP address of the EC2 instance where the
first node was started. If I understood correctly, it is enough to provide
only one IP address?

I did that, and using this configuration I started the 2nd node.

I connect to my first node and execute:
./control.sh --user ignite --password ignite --state -> CLUSTER ACTIVE
./control.sh --user ignite --password ignite --baseline -> only the first node
is found
I connect to my second node and execute:
./control.sh --user ignite --password ignite --state -> CLUSTER IS INACTIVE








Re: How to add new nodes to a running cluster?

2019-01-09 Thread Ilya Kasnacheev
Hello!

Can you provide config and log?

Note that if it does join the cluster, you will need to add it to the baseline
topology.

Regards,
-- 
Ilya Kasnacheev
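The baseline step Ilya mentions is done with control.sh. A hedged sketch of the invocations — subcommand spellings taken from the baseline-topology documentation, so verify against your Ignite version; note that `--baseline add` takes the node's consistent ID:

```shell
# Check cluster state and current baseline (illustrative; run on any node).
./control.sh --state
./control.sh --baseline

# After the new node joins the topology, add it to the baseline so it owns data.
./control.sh --baseline add <consistentId>
```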


вт, 8 янв. 2019 г. в 16:08, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I am trying to add a node with persistence enabled to a running cluster. I
> provide the IP address list of the existing cluster to the new node, but it
> does not communicate with the cluster.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: backup mode

2019-01-09 Thread Ilya Kasnacheev
Hello!

I don't think so! You will need to create a different cache with a new
configuration.

Regards,
-- 
Ilya Kasnacheev


пн, 7 янв. 2019 г. в 14:41, Som Som <2av10...@gmail.com>:

> hello.
>
> is it possible to change backups from 0 to 1 on an existing cache?
>


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-09 Thread Prasad Bhalerao
Hi Ilya,

I have created a reproducer for this issue and uploaded it to GitHub.

GitHub project: https://github.com/prasadbhalerao1983/IgniteTestPrj.git

Please run IgniteTransactionTester class to check the issue.


Exception:

Exception in thread "main" javax.cache.CacheException: Only pessimistic
repeatable read transactions are supported at the moment.
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
    at IgniteTransactionTester.testTransactionException(IgniteTransactionTester.java:53)
    at IgniteTransactionTester.main(IgniteTransactionTester.java:38)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Only
pessimistic repeatable read transactions are supported at the moment.
    at org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690)
    at org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
    at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)

Thanks,

Prasad



On Wed, Jan 9, 2019 at 6:22 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> It was discussed recently:
> http://apache-ignite-users.70518.x6.nabble.com/Migrate-from-2-6-to-2-7-td25738.html
>
> I don't think you will be able to use SQL from transactions in Ignite 2.7.
> While this looks like a regression, you will have to work around it for now.
>
> Do you have a small reproducer for this issue? I could file a ticket if
> you had. You can try to do it yourself, too.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> ср, 9 янв. 2019 г. в 15:33, Prasad Bhalerao  >:
>
>> Hi,
>>
>> My cache configuration is as follows. I am using TRANSACTIONAL and not
>> TRANSACTIONAL_SNAPSHOT.
>>
>>
>>
>> private CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIPV4CacheCfg() {
>>
>>   CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIpV4CacheCfg = new CacheConfiguration<>(CacheName.IP_CONTAINER_IPV4_CACHE.name());
>>   ipContainerIpV4CacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>   ipContainerIpV4CacheCfg.setWriteThrough(ENABLE_WRITE_THROUGH);
>>   ipContainerIpV4CacheCfg.setReadThrough(false);
>>   ipContainerIpV4CacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>   ipContainerIpV4CacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>   ipContainerIpV4CacheCfg.setBackups(1);
>>   Factory<IpContainerIpV4CacheStore> storeFactory = FactoryBuilder.factoryOf(IpContainerIpV4CacheStore.class);
>>   ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
>>   ipContainerIpV4CacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, IpContainerIpV4Data.class);
>>   ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());
>>   ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(84);
>>   RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>   affinityFunction.setExcludeNeighbors(true);
>>   ipContainerIpV4CacheCfg.setAffinity(affinityFunction);
>>   ipContainerIpV4CacheCfg.setStatisticsEnabled(true);
>>
>>   return ipContainerIpV4CacheCfg;
>> }
>>
>>
>> Thanks,
>> Prasad
>>
>> On Wed, Jan 9, 2019 at 5:45 PM Павлухин Иван  wrote:
>>
>>> Hi Prasad,
>>>
>>> > javax.cache.CacheException: Only pessimistic repeatable read
>>> transactions are supported at the moment.
>>> The exception you mention should happen only for a cache with the
>>> TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you configured
>>> TRANSACTIONAL_SNAPSHOT atomicity for any cache? As Denis mentioned,
>>> there are a number of bugs related to TRANSACTIONAL_SNAPSHOT, e.g. [1].
>>>
>>> [1] https://issues.apache.org/jira/browse/IGNITE-10520
>>>
>>> вс, 6 янв. 2019 г. в 20:03, Denis Magda :
>>> >
>>> > Hello,
>>> >
>>> > Ignite versions prior to 2.7 never supported transactions for SQL

Re: How to setup multi host node discovery

2019-01-09 Thread Ilya Kasnacheev
Hello!

You can just list all nodes' IPs in the configuration of each node, or use S3
discovery in the case of AWS.

Regards,
-- 
Ilya Kasnacheev
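A static-IP setup of the kind described above can be sketched as the following discoverySpi fragment of each node's Spring XML, using TcpDiscoveryVmIpFinder with every node's address listed; the hosts here are placeholders:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <!-- List every node's address; the range covers the default discovery ports. -->
                        <value>10.0.0.1:47500..47509</value>
                        <value>10.0.0.2:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```

With Docker on EC2, the addresses must be ones the containers can actually reach from each other (host networking or published discovery/communication ports).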


ср, 9 янв. 2019 г. в 17:14, newigniter :

> Greetings.
>
> I am new to Apache Ignite and I would like to set up multiple Ignite nodes,
> each running on a separate EC2 instance inside a Docker container.
> When I start the first Ignite node, the cluster is created and now I have
> one node in my cluster.
> I then create a new EC2 instance and start a new Ignite node there, inside a
> Docker container. What should my configuration look like in order for both
> nodes to join the same cluster?
>
> I went through the Ignite documentation for TCP/IP discovery but can't
> configure this to work.
>
> Appreciate any help!
>
> Appreciate any help!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


How to setup multi host node discovery

2019-01-09 Thread newigniter
Greetings.

I am new to Apache Ignite and I would like to set up multiple Ignite nodes,
each running on a separate EC2 instance inside a Docker container.
When I start the first Ignite node, the cluster is created and now I have
one node in my cluster.
I then create a new EC2 instance and start a new Ignite node there, inside a
Docker container. What should my configuration look like in order for both
nodes to join the same cluster?

I went through the Ignite documentation for TCP/IP discovery but can't
configure this to work.

Appreciate any help!





Re: select count () 20 times slower than select count(*)

2019-01-09 Thread Ilya Kasnacheev
Hello!

Index access may be slower than a full scan since it needs more lookups.

It is unfortunate that Ignite's query planner cannot figure this out, but
it is not without explanation.

Regards,
-- 
Ilya Kasnacheev


вт, 8 янв. 2019 г. в 17:27, David :

> Hi all,
>
> I have a simple table on a one-node cluster.
> I have filled it with 100 million rows of random data.
>
> But I don't understand why
> select count(*) from people; is 20 times faster than
> select count(id) from people; where id is indexed.
>
> Is there a reason for this?
> Below are the table structure and the explain plans.
>
>  8.246 seconds execution time
> 0: jdbc:ignite:thin://127.0.0.1/> select count(*) from people;
> ++
> |COUNT(*)|
> ++
> | 3600   |
> ++
> 1 row selected (8.246 seconds)
>
> 0: jdbc:ignite:thin://127.0.0.1/> explain select count(*) from people;
> ++
> |  PLAN  |
> ++
> | SELECT
> COUNT(*) AS __C0_0
> FROM PUBLIC.PEOPLE __Z0
> /* PUBLIC.PEOPLE.__SCAN_ */
> /* direct lookup */ |
> | SELECT
> CAST(SUM(__C0_0) AS BIGINT) AS __C0_0
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */ |
> ++
> 2 rows selected (0.005 seconds)
>
>  136.719 seconds execution time
> 0: jdbc:ignite:thin://127.0.0.1/> select count(id) from people;
> ++
> |   COUNT(ID)|
> ++
> | 3600   |
> ++
> 1 row selected (136.719 seconds)
>
>
> explain select count(id) from people;
> ++
> |  PLAN  |
> ++
> | SELECT
> COUNT(__Z0.ID) AS __C0_0
> FROM PUBLIC.PEOPLE __Z0
> /* PUBLIC."_key_PK_proxy" */ |
> | SELECT
> CAST(SUM(__C0_0) AS BIGINT) AS __C0_0
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */ |
> ++
> 2 rows selected (0.004 seconds)
>
>
> sql = "CREATE TABLE IF NOT EXISTS People " +
> "(id BIGINT, " +
> "first_name varchar(20), " +
> "last_name varchar(20), " +
> "age int, " +
> "current_city_id int, " +
> "born_city_id int, " +
> "gender varchar(1), " +
> "PRIMARY KEY(id)) " +
> "WITH \"template=PARTITIONED\""
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-09 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer project to reliably confirm this issue?

Regards,
-- 
Ilya Kasnacheev


ср, 9 янв. 2019 г. в 12:39, Akash Shinde :

> Added  d...@ignite.apache.org.
>
> Should I log Jira for this issue?
>
> Thanks,
> Akash
>
>
>
> On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde  wrote:
>
> > Hi,
> >
> > No, both nodes (client and server) are running Ignite version 2.7. I am
> > starting both server and client from the IntelliJ IDE.
> >
> > Version printed in Server node log:
> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
> >
> > Version in client node log:
> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
> >
> > Thanks,
> > Akash
> >
> > On Tue, Jan 8, 2019 at 5:18 PM Mikael  wrote:
> >
> >> Hi!
> >>
> >> Any chance you might have one node running 2.6 or something like that ?
> >>
> >> It looks like it gets a different object that does not match the one
> >> expected in 2.7.
> >>
> >> Mikael
> >> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
> >>
> >> Before submitting the affinity task, Ignite first gets the cached affinity
> >> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob".
> >> But while retrieving the output of this AffinityJob, Ignite deserializes
> >> it, and I am getting an exception during that deserialization.
> >> In the TcpDiscoveryNode.readExternal() method, while deserializing the
> >> CacheMetrics object from the input stream, on the 14th iteration I get the
> >> following exception. The complete stack trace is given in this mail chain.
> >>
> >> Caused by: java.io.IOException: Unexpected error occurred during
> >> unmarshalling of an instance of the class:
> >> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
> >>
> >> This is working fine on Ignite 2.6 version but giving problem on 2.7.
> >>
> >> Is this a bug or am I doing something wrong?
> >>
> >> Can someone please help?
> >>
> >> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> When I execute affinity.partition(key), I get the following exception
> >>> on Ignite 2.7.
> >>>
> >>> Stacktrace:
> >>>
> >>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
> >>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
> >>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
> >>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
> >>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
> >>> isCancelled=false, retry=null]
> >>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with optimized marshaller
> >>>  at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
> >>>  at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
> >>>  at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
> >>>  at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
> >>>  at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
> >>>  at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
> >>>  at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
> >>>  at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
> >>>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >>>  at java.lang.Thread.run(Thread.java:748)
> >>> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to unmarshal object with optimized marshaller
> >>>  at org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
> >>>  at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
> >>>  at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> >>>  at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
> >>>  at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
> >>>  at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> >>>  at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
> >>>  ... 10 common frames omitted
> >>> Caused by: org.apache.ignite.IgniteCheckedException: Failed to
> >>> deserialize object with given class loader:
> >>> [clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, 

Re: SqlQuery retrieves same cache entry twice. ScanQuery results conflict with identical SqlQuery

2019-01-09 Thread Ilya Kasnacheev
Hello!

What is the Ignite version that you are using? Can you re-try using 2.7 if
you're using an earlier one?

Regards,
-- 
Ilya Kasnacheev


Sun, 6 Jan 2019 at 20:54, oshevchenko :

> Met a very strange SqlQuery problem when executing a simple query on a
> partitioned cache. The problem is that the same cache entry is retrieved
> twice. It looks like the entry is first retrieved from the primary partition
> and the second time it is taken from the backup. Another oddity is that the
> same ScanQuery gives correct results. The code I run to reproduce this
> behavior:
> public class UnitDupKeyProblemTests {
> public static void main(String[] args) {
> try (Ignite ignite = Ignition.start("ignite-client-conf-unit.xml"))
> {
> Collection<Result> results = ignite.compute().broadcast(new
>
> TestCallable("ENT_LST_0004_20180630_14009_99_9__23_USD_9_9_99_2018_9_9_LN00025_99_99"));
> results.forEach(System.out::println);
> }
> }
>
> private static class BiPredicateFilter implements
> IgniteBiPredicate<String, Balance> {
> @Override
> public boolean apply(String string, Balance balance) {
> return balance.getTransType().equals("23");
> }
> }
>
> private static class Result implements Serializable {
> int timesFound;
> int nonPrimary;
> Collection<String> addresses;
> UUID nodeUid;
> boolean primary;
> boolean backUp;
>
> public Result(int timesFound, int nonPrimary, Collection<String>
> addresses, UUID nodeUid, boolean primary, boolean backUp) {
> this.timesFound = timesFound;
> this.addresses = addresses;
> this.nodeUid = nodeUid;
> this.primary = primary;
> this.nonPrimary = nonPrimary;
> this.backUp = backUp;
> }
>
> public int getTimesFound() {
> return timesFound;
> }
>
> @Override
> public String toString() {
> return "Result{" +
> "timesFound=" + timesFound +
> ", nonPrimary=" + nonPrimary +
> ", addresses=" + addresses +
> ", nodeUid=" + nodeUid +
> ", primary=" + primary +
> ", backUp=" + backUp +
> '}';
> }
> }
>
> private static class TestCallable implements IgniteCallable<Result> {
>
> @IgniteInstanceResource
> transient Ignite ignite;
>
> private final String keyToTest;
>
> public TestCallable(String keyToTest) {
> this.keyToTest = keyToTest;
> }
>
> @Override
> public Result call() throws Exception {
> final IgniteCache<String, Balance> cache =
> ignite.cache("BALANCE");
> final ClusterNode clusterNode = ignite.cluster().localNode();
> //final ScanQuery<String, Balance> query = new ScanQuery<>(new
> BiPredicateFilter());
> final SqlQuery<String, Balance> query = new
> SqlQuery<>(Balance.class, "transType='23'");
>
>
> query.setLocal(true);
> int num = 0;
> int nonPrimary = 0;
> try (final QueryCursor<Cache.Entry<String, Balance>> cursor =
> cache.query(query)) {
> for (Cache.Entry<String, Balance> entry : cursor) {
> if (cache.localPeek(entry.getKey(),
> CachePeekMode.PRIMARY) == null) {
> nonPrimary++;
> }
>
> if (keyToTest.equals(entry.getKey())) {
> num++;
> }
> }
> }
>
> ignite.affinity("BALANCE").isPrimary(clusterNode, keyToTest);
> return new Result(num, nonPrimary, clusterNode.addresses(),
> clusterNode.id(),
> ignite.affinity("BALANCE").isPrimary(clusterNode,
> keyToTest),
> ignite.affinity("BALANCE").isBackup(clusterNode,
> keyToTest));
> }
> }
> }
>
> Output that proves my assumptions is:
> Result{timesFound=1, nonPrimary=4, addresses=[127.0.0.1, 48.124.176.58],
> nodeUid=0d45348c-2e94-4ac3-b9aa-b61fbdd56749, primary=false, backUp=true}
> Result{timesFound=0, nonPrimary=5, addresses=[127.0.0.1, 48.124.184.19],
> nodeUid=b5161591-7d62-4c0d-a866-7610b3665760, primary=false, backUp=false}
> Result{timesFound=0, nonPrimary=2, addresses=[127.0.0.1, 48.124.176.57],
> nodeUid=da3da1d3-a4f0-4273-8fac-f4ba479ae211, primary=false, backUp=false}
> Result{timesFound=1, nonPrimary=2, addresses=[127.0.0.1, 48.124.184.20],
> nodeUid=39203ddb-2c6b-4247-84bf-22f380130711, primary=true, backUp=false}
>
> If I switch to ScanQuery, the output looks correct:
> Result{timesFound=0, nonPrimary=0, addresses=[127.0.0.1, 48.124.176.58],
> nodeUid=0d45348c-2e94-4ac3-b9aa-b61fbdd56749, primary=false, backUp=true}
> Result{timesFound=1, nonPrimary=0, addresses=[127.0.0.1, 48.124.184.20],
> nodeUid=39203ddb-2c6b-4247-84bf-22f380130711, primary=true, backUp=false}
> Result{timesFound=0, nonPrimary=0, addresses=[127.0.0.1, 

Not able to load data from Cassandra database to Ignite Cache.

2019-01-09 Thread Kiran Kumar
Configured three XML files: one for the Cassandra connections, one for
persistence, and one default.xml where both the Cassandra and persistence
bean ids are configured; I also updated the cache store configuration.

I was able to save data to Cassandra using *cache.put*.

But the requirement here is that first I need to load the data from the
Cassandra database into the Ignite cache, then perform dataframe streaming,
and then save the new data from the Ignite cache back to Cassandra.

Is there any way to add dynamic POJOs in KeyPersistence?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
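
For context, the usual way to preload an Ignite cache from its configured
persistence (here the Cassandra CacheStore) is IgniteCache#loadCache, which
asks the store to stream its data into the cache. A minimal sketch follows;
the config file name, cache name, and key/value types are assumptions, not
taken from the original setup:

```java
// Hedged sketch: warming up an Ignite cache from the Cassandra-backed
// CacheStore configured in XML. "default.xml", "personCache" and the
// key/value types below are illustrative placeholders.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PreloadFromCassandra {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("default.xml")) {
            IgniteCache<Long, Object> cache = ignite.cache("personCache");
            // loadCache(null) passes no filter: the Cassandra CacheStore's
            // loadCache implementation pulls all rows into the cache.
            cache.loadCache(null);
            System.out.println("Preloaded entries: " + cache.size());
        }
    }
}
```

After this call the cache is populated and subsequent processing (e.g. the
dataframe streaming step) can read from memory instead of Cassandra.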


How to save data from Ignite Cache to cassandra db automatically

2019-01-09 Thread Kiran Kumar
I followed the link below to verify the Ignite cache implementation using
Scala.

https://github.com/apache/ignite/blob/master/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameWriteExample.scala

Initially, in setupServerAndData, a table is created and some values are
inserted.
Later, the content of a JSON file is loaded into a data frame, and the
content of the data frame is written to Ignite using the .save() function.
Is there any way to save data from the Ignite cache to a Cassandra database?

Is there any way to load data from a Cassandra db into the Ignite cache?







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Out of sync clocks on all ignite nodes/VMs

2019-01-09 Thread Ilya Kasnacheev
Hello!

Apache Ignite doesn't depend on the clock too much, so it should not be a
problem unless we are talking about more than a few minutes of divergence.

It will not trigger rebalancing or extra partition exchanges.
Transactional processing will not be affected.

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jan 2019 at 13:12, Prasad Bhalerao :

> Can the nodes go out of cluster If the clocks on all nodes are out of sync?
>
> Can it create issue in cluster or cluster formation?
>
> Can it trigger rebalancing process or partition exchange process
> unnecessarily on nodes?
>
> Thanks,
> Prasad
>


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-09 Thread Prasad Bhalerao
Hi,

My cache configuration is as follows. I am using TRANSACTIONAL and not
TRANSACTIONAL_SNAPSHOT.



private CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIPV4CacheCfg() {

  CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIpV4CacheCfg =
      new CacheConfiguration<>(CacheName.IP_CONTAINER_IPV4_CACHE.name());
  ipContainerIpV4CacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
  ipContainerIpV4CacheCfg.setWriteThrough(ENABLE_WRITE_THROUGH);
  ipContainerIpV4CacheCfg.setReadThrough(false);
  ipContainerIpV4CacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
  ipContainerIpV4CacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
  ipContainerIpV4CacheCfg.setBackups(1);
  Factory<IpContainerIpV4CacheStore> storeFactory =
      FactoryBuilder.factoryOf(IpContainerIpV4CacheStore.class);
  ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
  ipContainerIpV4CacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
      IpContainerIpV4Data.class);
  ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());
  ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(84);
  RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
  affinityFunction.setExcludeNeighbors(true);
  ipContainerIpV4CacheCfg.setAffinity(affinityFunction);
  ipContainerIpV4CacheCfg.setStatisticsEnabled(true);

  return ipContainerIpV4CacheCfg;
}


Thanks,
Prasad

On Wed, Jan 9, 2019 at 5:45 PM Павлухин Иван  wrote:

> Hi Prasad,
>
> > javax.cache.CacheException: Only pessimistic repeatable read
> transactions are supported at the moment.
> The exception you mention should happen only for a cache with
> TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you configured
> TRANSACTIONAL_SNAPSHOT atomicity for any cache? As Denis mentioned,
> there are a number of bugs related to TRANSACTIONAL_SNAPSHOT, e.g. [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-10520
>
> Sun, 6 Jan 2019 at 20:03, Denis Magda :
> >
> > Hello,
> >
> > Ignite versions prior to 2.7 never supported transactions for SQL
> queries. You were enlisting SQL in transactions for your own risk. Ignite
> version 2.7 introduced true transactional support for SQL based on MVCC.
> Presently it's in beta with GA to be available around Q2-Q3 this year. The
> community is working on optimizations.
> >
> > Please refer to this docs for more details:
> > https://apacheignite.readme.io/docs/multiversion-concurrency-control
> > https://apacheignite-sql.readme.io/docs/transactions
> > https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
> >
> > --
> > Denis
> >
> > On Sat, Jan 5, 2019 at 7:48 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
> >>
> >> Can someone please explain if anything has changed in ignite 2.7.
> >>
> >> Started getting this exception after upgrading to 2.7.
> >>
> >>
> >> -- Forwarded message -
> >> From: Prasad Bhalerao 
> >> Date: Fri 4 Jan, 2019, 8:41 PM
> >> Subject: Re: Getting javax.cache.CacheException after upgrading to
> Ignite
> >> 2.7
> >> To: 
> >>
> >>
> >> Can someone please help me with this?
> >>
> >> On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com
> >> wrote:
> >>
> >> > Hi
> >> >
> >> > After upgrading to 2.7 version I am getting following exception. I am
> >> > executing a SELECT sql inside optimistic transaction with
> serialization
> >> > isolation level.
> >> >
> >> > 1) Has anything changed from 2.6 to 2.7 version?  This work fine
> prior to
> >> > 2.7 version.
> >> >
> >> > After changing it to Pessimistic and isolation level to
> REPEATABLE_READ it
> >> > works fine.
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > *javax.cache.CacheException: Only pessimistic repeatable read
> transactions
> >> > are supported at the moment.at
> >> >
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)at
> >> >
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)at
> >> >
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)at
> >> >
> com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)*
> >> >
> >> > Thanks,
> >> > Prasad
> >> >
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
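
For reference, the workaround described in this thread (switching the
transaction to pessimistic concurrency with REPEATABLE_READ isolation) looks
roughly as follows. This is a sketch, not the poster's actual code; the cache
name, table, and query are illustrative:

```java
// Hedged sketch: running a SQL query inside a PESSIMISTIC / REPEATABLE_READ
// transaction, the only combination Ignite 2.7 accepts for SQL on caches
// with TRANSACTIONAL_SNAPSHOT atomicity. Names below are illustrative.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class TxSelectExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Object, Object> cache = ignite.cache("IP_CONTAINER_IPV4_CACHE");
            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC,
                    TransactionIsolation.REPEATABLE_READ)) {
                // The SELECT now runs inside a pessimistic repeatable-read
                // transaction, so the CacheException is no longer thrown.
                cache.query(new SqlFieldsQuery(
                        "select * from IpContainerIpV4Data where id = ?").setArgs(1L))
                     .getAll();
                tx.commit();
            }
        }
    }
}
```

An OPTIMISTIC/SERIALIZABLE start would reproduce the original exception.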
>


Re: Does Ignite message support C++?

2019-01-09 Thread Igor Sapego
That's right, Ignite C++ does not currently support messaging.

Best Regards,
Igor


On Tue, Jan 8, 2019 at 3:07 AM SamsonLai  wrote:

> I have an Ignite cluster running on Java; all (Java) nodes within the
> cluster can send and receive Ignite messages. Now I have to create another
> client node in C++. I am using Ignite 2.7, but it seems that Ignite C++ does
> not support Ignite messages, right?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-09 Thread Павлухин Иван
Hi Prasad,

> javax.cache.CacheException: Only pessimistic repeatable read transactions are 
> supported at the moment.
The exception you mention should happen only for a cache with
TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you configured
TRANSACTIONAL_SNAPSHOT atomicity for any cache? As Denis mentioned,
there are a number of bugs related to TRANSACTIONAL_SNAPSHOT, e.g. [1].

[1] https://issues.apache.org/jira/browse/IGNITE-10520

Sun, 6 Jan 2019 at 20:03, Denis Magda :
>
> Hello,
>
> Ignite versions prior to 2.7 never supported transactions for SQL queries. 
> You were enlisting SQL in transactions for your own risk. Ignite version 2.7 
> introduced true transactional support for SQL based on MVCC. Presently it's 
> in beta with GA to be available around Q2-Q3 this year. The community is 
> working on optimizations.
>
> Please refer to this docs for more details:
> https://apacheignite.readme.io/docs/multiversion-concurrency-control
> https://apacheignite-sql.readme.io/docs/transactions
> https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
>
> --
> Denis
>
> On Sat, Jan 5, 2019 at 7:48 PM Prasad Bhalerao  
> wrote:
>>
>> Can someone please explain if anything has changed in ignite 2.7.
>>
>> Started getting this exception after upgrading to 2.7.
>>
>>
>> -- Forwarded message -
>> From: Prasad Bhalerao 
>> Date: Fri 4 Jan, 2019, 8:41 PM
>> Subject: Re: Getting javax.cache.CacheException after upgrading to Ignite
>> 2.7
>> To: 
>>
>>
>> Can someone please help me with this?
>>
>> On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao > wrote:
>>
>> > Hi
>> >
>> > After upgrading to 2.7 version I am getting following exception. I am
>> > executing a SELECT sql inside optimistic transaction with serialization
>> > isolation level.
>> >
>> > 1) Has anything changed from 2.6 to 2.7 version?  This work fine prior to
>> > 2.7 version.
>> >
>> > After changing it to Pessimistic and isolation level to REPEATABLE_READ it
>> > works fine.
>> >
>> >
>> >
>> >
>> >
>> >
>> > *javax.cache.CacheException: Only pessimistic repeatable read transactions
>> > are supported at the moment.at
>> > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)at
>> > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)at
>> > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)at
>> > com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)*
>> >
>> > Thanks,
>> > Prasad
>> >



-- 
Best regards,
Ivan Pavlukhin


Re: Cluster of two nodes with minimal port use

2019-01-09 Thread Tobias König

I think I can narrow it down to this error message:

[12:38:10,957][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Failed 
to send message to next node [msg=TcpDiscoveryNodeAddedMessage 
[node=TcpDiscoveryNode [id=98c88dfd-758e-40f4-9597-eb4c4f700280, 
addrs=[172.24.10.79], sockAddrs=[/172.24.10.79:3013], discPort=3013, ... 
[snipped, full message below]


However, I can't find any information on /why/ the sending failed. I 
even started Ignite in verbose mode.


I studied many posts by users having problems forming Ignite clusters,
and from what I could gather, discovery (port 3013) seems
to work, but communication (port 3012) does not. I already checked that
the ports are reachable from both hosts, of course.
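
One thing that may be worth double-checking in this situation (this is an
assumption about the configuration, not something visible in the posted
XML): both TcpDiscoverySpi and TcpCommunicationSpi try a range of ports
above localPort by default, so pinning each SPI to exactly one exposed port
also means shrinking localPortRange. A hypothetical fragment:

```xml
<!-- Assumed fragment: restrict discovery to port 3013 and communication
     to port 3012 only, instead of the default port-scan range. -->
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="localPort" value="3013"/>
        <property name="localPortRange" value="0"/>
    </bean>
</property>
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <property name="localPort" value="3012"/>
        <property name="localPortRange" value="0"/>
    </bean>
</property>
```

With the range left at its default, a node behind a bridged Docker network
may bind a port that the other host never reaches.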


Can somebody help?


__

[12:38:10,957][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Failed 
to send message to next node [msg=TcpDiscoveryNodeAddedMessage 
[node=TcpDiscoveryNode [id=98c88dfd-758e-40f4-9597-eb4c4f700280, 
addrs=[172.24.10.79], sockAddrs=[/172.24.10.79:3013], discPort=3013, 
order=0, intOrder=6, lastExchangeTime=1547033890878, loc=false, 
ver=2.7.0#20181130-sha1:256ae401, isClient=false], 
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@1fdbcfd, 
discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null, 
gridStartTime=1547033858511, super=TcpDiscoveryAbstractMessage 
[sndNodeId=null, id=98c37623861-9fe36b10-cda3-41bd-a91f-82e021edf5ec, 
verifierNodeId=9fe36b10-cda3-41bd-a91f-82e021edf5ec, topVer=0, 
pendingIdx=0, failedNodes=null, isClient=false]], next=TcpDiscoveryNode 
[id=98c88dfd-758e-40f4-9597-eb4c4f700280, addrs=[172.24.10.79], 
sockAddrs=[/172.24.10.79:3013], discPort=3013, order=0, intOrder=6, 
lastExchangeTime=1547033890878, loc=false, 
ver=2.7.0#20181130-sha1:256ae401, isClient=false], errMsg=Failed to send 
message to next node [msg=TcpDiscoveryNodeAddedMessage 
[node=TcpDiscoveryNode [id=98c88dfd-758e-40f4-9597-eb4c4f700280, 
addrs=[172.24.10.79], sockAddrs=[/172.24.10.79:3013], discPort=3013, 
order=0, intOrder=6, lastExchangeTime=1547033890878, loc=false, 
ver=2.7.0#20181130-sha1:256ae401, isClient=false], 
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@1fdbcfd, 
discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null, 
gridStartTime=1547033858511, super=TcpDiscoveryAbstractMessage 
[sndNodeId=null, id=98c37623861-9fe36b10-cda3-41bd-a91f-82e021edf5ec, 
verifierNodeId=9fe36b10-cda3-41bd-a91f-82e021edf5ec, topVer=0, 
pendingIdx=0, failedNodes=null, isClient=false]], next=ClusterNode 
[id=98c88dfd-758e-40f4-9597-eb4c4f700280, order=0, addr=[172.24.10.79], 
daemon=false]]]



On 1/8/19 3:13 PM, Tobias König wrote:

Hi there,

I'm trying to get an Ignite cluster consisting of two nodes to work
that uses a minimum number of exposed ports. I'm new to Ignite, but it
is my understanding that it should suffice to pin each node to one
specific port for communication and one for discovery. The overall
goal is to get a Docker cluster (with default bridged networking)
working without multicast and without --net=host.


However, I'm doing preliminary tests /without/ docker and am directly 
using my local machine (Node 1, IP 172.24.10.79) and a Raspberry Pi 
(Node 2, IP 172.24.10.83), and I can't get the cluster to work, 
because the discovery process doesn't succeed. I'm using a static IP 
finder in which I point each node to its corresponding counterpart.


XML-configuration of both nodes with the aforementioned minimal use of 
ports is attached inline.


If I start node 1 first and then node 2, no discovery process is 
initiated in the first minutes. If I start node 2 first and then node 
1, the discovery process is initiated but not completed successfully. 
I'll attach logs for the second case for both node 2 and 1.


Can somebody spot my configuration error?

Best regards and TIA,
Tobias



P.S. I was able to reproduce the error on two "regular" machines as 
well, without the use of a Raspberry Pi.



___

# ignite-config-node1.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="3013"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.83:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="3012"/>
            </bean>
        </property>
    </bean>
</beans>


# ignite-config-node2.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="3013"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.79:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="3012"/>
            </bean>
        </property>
    </bean>
</beans>

Re: JDK 11 support

2019-01-09 Thread Petr Ivanov
Currently, compilation is not supported; the current efforts are about providing
runtime compatibility.
I hope full support (compilation + runtime) will be introduced in 2.8 (and not
later than 3.0).

> On 9 Jan 2019, at 11:27, zaleslaw  wrote:
> 
> I haven't had any trouble running Ignite 2.6 with JDK 8 or Ignite 2.7 with
> JDK 8/9.
> But a few weeks ago it (Ignite 2.6/Ignite 2.7) didn't compile with JDK 11
> (Oracle).
> 
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Out of sync clocks on all ignite nodes/VMs

2019-01-09 Thread Prasad Bhalerao
Can the nodes drop out of the cluster if the clocks on all nodes are out of sync?

Can it cause issues in the cluster or in cluster formation?

Can it trigger the rebalancing process or the partition exchange process
unnecessarily on nodes?

Thanks,
Prasad


Re: Distributed Training in tensorflow

2019-01-09 Thread dmitrievanthony
Let me also add that it depends on what you want to achieve. TensorFlow
supports distributed training and does it on its own. But if you use
pure TensorFlow you'll have to start the TensorFlow workers manually and
distribute the data manually as well. You can do that, I mean start the
workers manually on the nodes the Ignite cluster occupies, or even on some
other nodes. It will work, perhaps work well in some cases, and work very
well given an accurate manual setup.

At the same time, Apache Ignite provides cluster management functionality
for TensorFlow that starts the workers automatically on the same nodes
where Apache Ignite keeps the data. From our perspective it's the most
efficient way to set up a TensorFlow cluster on top of an Apache Ignite
cluster because it reduces data transfers. You can find more details in the
readme: https://apacheignite.readme.io/docs/ignite-dataset and
https://apacheignite.readme.io/docs/tf-command-line-tool.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed Training in tensorflow

2019-01-09 Thread zaleslaw
Dear Mehdi Sey,

First of all, we need a running Ignite cluster with a dataset loaded
into caches.

NOTE: This dataset can be reached via "from tensorflow.contrib.ignite
import IgniteDataset" in your Jupyter Notebook.

Second, we shouldn't forget about the tf.device("...") call.

The whole documentation can be found here.

Short answer: yes, we must.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cluster of two nodes with minimal port use

2019-01-09 Thread Tobias König

Hi Stephen,

I tested your proposal, but to no avail. The discovery process is still
continuously retried, but never succeeds.


Best regards,
Tobias



On 1/8/19 4:07 PM, Stephen Darlington wrote:

Try putting the same list on both nodes:

172.24.10.79:3013
172.24.10.83:3013

Regards,
Stephen


On 8 Jan 2019, at 14:13, Tobias König  wrote:

Hi there,

I'm trying to get an Ignite cluster consisting of two nodes to work that uses
a minimum number of exposed ports. I'm new to Ignite, but it is my
understanding that it should suffice to pin each node to one specific port
for communication and one for discovery. The overall goal is to get a Docker
cluster (with default bridged networking) working without multicast and without
--net=host.

However, I'm doing preliminary tests /without/ docker and am directly using my 
local machine (Node 1, IP 172.24.10.79) and a Raspberry Pi (Node 2, IP 
172.24.10.83), and I can't get the cluster to work, because the discovery 
process doesn't succeed. I'm using a static IP finder in which I point each 
node to its corresponding counterpart.

XML-configuration of both nodes with the aforementioned minimal use of ports is 
attached inline.

If I start node 1 first and then node 2, no discovery process is initiated in 
the first minutes. If I start node 2 first and then node 1, the discovery 
process is initiated but not completed successfully. I'll attach logs for the 
second case for both node 2 and 1.

Can somebody spot my configuration error?

Best regards and TIA,
Tobias



P.S. I was able to reproduce the error on two "regular" machines as well, 
without the use of a Raspberry Pi.


___

# ignite-config-node1.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="3013"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.83:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="3012"/>
            </bean>
        </property>
    </bean>
</beans>


# ignite-config-node2.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="3013"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.79:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="3012"/>
            </bean>
        </property>
    </bean>
</beans>










Re: Ignite and spark for deep learning

2019-01-09 Thread zaleslaw
Dear Mehdi Sey

Yes, both platforms are used for in-memory computing, but they have
different APIs, different histories of feature development, and different
ways of integrating with well-known DL frameworks (like DL4J and TensorFlow).

From my point of view, you get no speed-up from an Ignite + Spark + DL4j
integration.

Caching data in Ignite as a backend for RDDs and dataframes is first of all
an acceleration of business logic based on SQL queries. The same does not
hold for ML frameworks.

We have no proof that using Ignite as a backend could speed up DL4j or
MLlib algorithms.

Moreover, to avoid this, we wrote our own ML library, which is better than
MLlib and runs natively on Ignite.

In my opinion, you should choose the Ignite + Ignite ML + TF integration, or
Spark + DL4j, to solve your data science task (where you need neural networks).


--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDK 11 support

2019-01-09 Thread zaleslaw
I haven't had any trouble running Ignite 2.6 with JDK 8 or Ignite 2.7 with
JDK 8/9.
But a few weeks ago it (Ignite 2.6/Ignite 2.7) didn't compile with JDK 11
(Oracle).





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/