Re: Affinity Collocation

2016-04-26 Thread Alexey Goncharuk
Hi,

As long as the cache configuration is the same, affinity assignment for such
caches will be identical, so you do not need to explicitly specify a cache
dependency. On the other hand, if the cache configurations do differ, it is
not always possible to collocate keys properly, so in that case such a
dependency also does not seem legitimate.

Does that make sense?
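To illustrate the point: affinity is a deterministic function of the key and the partition count, so two identically configured caches agree on placement automatically. The hash-mod partitioner below is an assumption for illustration only; Ignite actually uses RendezvousAffinityFunction, but the key property (determinism) is the same:

```java
import java.util.Objects;

// Sketch of why identically configured caches collocate with no explicit
// dependency: partition assignment is a pure function of the affinity key
// and the partition count. (Simplified hash-mod partitioner; Ignite's real
// RendezvousAffinityFunction is more sophisticated, but just as deterministic.)
public class SimpleAffinity {
    public static int partition(Object affinityKey, int partitions) {
        return Math.floorMod(Objects.hashCode(affinityKey), partitions);
    }

    public static void main(String[] args) {
        int parts = 1024;
        // A Person entry keyed by orgId and an Organization entry keyed by
        // the same orgId land in the same partition of their caches.
        int personPart = partition("org-42", parts);
        int orgPart = partition("org-42", parts);
        System.out.println(personPart == orgPart); // prints: true
    }
}
```

Because both caches evaluate the same function on the same key, no cross-cache dependency needs to be declared anywhere.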


Affinity Collocation

2016-04-26 Thread Kamal C
Hi all,

In the example provided for affinity collocation [1], how do the keys
of different caches get collocated together?

Say, there are two caches:
1. Person cache
2. Organization cache

While inserting elements into the Person cache, I have to use either the
*AffinityKeyMapped* annotation or *AffinityKey* to collocate the data
with the organization on the same node.

My question: we are not specifying any dependency such that the person
cache depends on the organization cache. There can be any number of caches
with the same keys as the organization cache. How does it work, then?

[1] https://apacheignite.readme.io/docs/affinity-collocation

--Kamal


Re: ODBC Driver?

2016-04-26 Thread arthi
Thank you...

Will this driver support rowset binding of result sets?
Something like this: https://msdn.microsoft.com/en-us/library/ms403318.aspx

Thanks,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ODBC-Driver-tp4557p4575.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error running nodes in .net and c++

2016-04-26 Thread Murthy Kakarlamudi
Can someone please explain how Ignite works for the following use case? The
server node loads data from the persistent store into the cache upon startup.
There will be a couple of client nodes (C++ and .NET based) that need to
access the cache.
The server node will have the configuration for the cache store. Should the
client nodes also have the cache store configuration? I am hoping not,
because all they need is to read the cache.
But I am assuming that if these client nodes can also update the cache, then
the cache store config is required if write-through is enabled.
Please validate my assumptions.

Thanks,
Satya...

On Tue, Apr 26, 2016 at 9:44 AM, Murthy Kakarlamudi 
wrote:

> No, I am not. I have different configs for my server node in Java vs my
> client node in c++. That was the question I had. In my server node that
> loads the data from persistent store to cache, I configured cachestore. But
> my c++ node is only a client node that needs to access cache. So I was not
> sure if my client node config should have the cachestore details as well.
>
> Let me try the option you suggested.
>
> On Tue, Apr 26, 2016 at 9:40 AM, Vladimir Ozerov 
> wrote:
>
>> Hi Murthy,
>>
>> Do you start all nodes with the same XML configuration? Please ensure
>> that this is so, and all nodes know all caches from configuration in
>> advance.
>>
>> Vladimir.
>>
>> On Tue, Apr 26, 2016 at 3:27 PM, Murthy Kakarlamudi 
>> wrote:
>>
>>> Hi Vladimir...I made the update and still running into the same issue.
>>>
>>> Here is the updated spring config for my Java node:
>>> http://www.springframework.org/schema/beans;
>>> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; xmlns:util="
>>> http://www.springframework.org/schema/util;
>>> xsi:schemaLocation="
>>> http://www.springframework.org/schema/beans
>>> http://www.springframework.org/schema/beans/spring-beans.xsd
>>> http://www.springframework.org/schema/util
>>> http://www.springframework.org/schema/util/spring-util-2.5.xsd;>
>>>
>>> >> class="org.springframework.jdbc.datasource.DriverManagerDataSource">
>>> >> value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
>>> >> value="jdbc:sqlserver://LAPTOP-QIT4AVOG\MSSQLSERVER64;databaseName=PrimeOne;integratedSecurity=true"
>>> />
>>> >> class="org.apache.ignite.configuration.IgniteConfiguration">
>>> >> class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
>>> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>> 127.0.0.1:47500..47509
>>>
>>>
>>> Error:
>>> >>> Cache node started.
>>>
>>> [08:27:25,045][SEVERE][exchange-worker-#38%null%][GridDhtPartitionsExchangeFuture]
>>> Failed to reinitialize local partitions (preloading will be stopped):
>>> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11,
>>> minorTopVer=1], nodeId=bc7d2aa2, evt=DISCOVERY_CUSTOM_EVT]
>>> class org.apache.ignite.IgniteException: Spring application context
>>> resource is not injected.
>>> at
>>> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:156)
>>> at
>>> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:96)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1243)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:956)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:523)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
>>> at
>>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>> at java.lang.Thread.run(Thread.java:745)
>>> [08:27:25,063][SEVERE][exchange-worker-#38%null%][GridCachePartitionExchangeManager]
>>> Failed to wait for completion of partition map exchange (preloading will
>>> not start): 

Re: Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:

2016-04-26 Thread kcheng.mvp
Yes, you are right. One more minor change is needed as well:

you need to set "peerClassLoadingEnabled" to "true".


By the way, what does the explanation below mean
(https://apacheignite.readme.io/v1.5/docs/jcache)?


> Whenever doing puts and updates in cache, you are usually sending the full
> object state across the network. EntryProcessor allows for
> processing data directly on primary nodes, often transferring only the
> deltas instead of the full state.
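The quoted passage is about where the computation happens: a plain put ships the whole value across the network, while IgniteCache.invoke ships only a small closure that mutates the entry in place on the primary node. A self-contained sketch of the idea, with a plain HashMap standing in for the cache (the real API is javax.cache.processor.EntryProcessor via IgniteCache.invoke; everything below is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Sketch of the EntryProcessor idea: instead of reading a (possibly large)
// value, mutating it locally, and putting the whole value back, you send a
// small "delta" closure that is applied where the entry lives.
public class EntryProcessorSketch {
    private final Map<String, Long> cache = new HashMap<>();

    // Stand-in for IgniteCache.invoke(key, processor): only the small
    // closure "travels"; the value is updated in place.
    public Long invoke(String key, BiFunction<String, Long, Long> processor) {
        Long updated = processor.apply(key, cache.get(key));
        cache.put(key, updated);
        return updated;
    }

    public static void main(String[] args) {
        EntryProcessorSketch c = new EntryProcessorSketch();
        // Increment-in-place: the "delta" is the +1 closure, not the value.
        c.invoke("counter", (k, val) -> val == null ? 1L : val + 1);
        Long result = c.invoke("counter", (k, val) -> val == null ? 1L : val + 1);
        System.out.println(result); // prints: 2
    }
}
```

With a real distributed cache, the saving is that the closure is a few bytes while the stored object may be large; the full state never crosses the network.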





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Caused-by-class-org-apache-ignite-binary-BinaryInvalidTypeException-tp4311p4572.html


Re: Error running nodes in .net and c++

2016-04-26 Thread Murthy Kakarlamudi
Hi Vladimir...With the above change, I am running into the following error:

22:03:24,474][ERROR][exchange-worker-#49%null%][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], nodeId=771a1f0a, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.IgniteCheckedException: Failed to start component:
class org.apache.ignite.IgniteException: Failed to initialize cache store
(data source is not provided).
at
org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8385)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1269)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:956)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:523)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Unknown Source)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize
cache store (data source is not provided).
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:297)
at
org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8381)
... 8 more
[22:03:24,513][ERROR][exchange-worker-#49%null%][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will
not start): GridDhtPartitionsExchangeFuture [dummy=false,
forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent
[customMsg=DynamicCacheChangeBatch [reqs=[DynamicCacheChangeRequest
[deploymentId=0eab4755451-5d255cd8-68f9-43a8-833c-c0ad665177ae,
startCfg=CacheConfiguration [name=buCache,
storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictSync=false,
evictKeyBufSize=1024, evictSyncConcurrencyLvl=4, evictSyncTimeout=1,
evictFilter=null, evictMaxOverflowRatio=10.0, eagerTtl=true,
dfltLockTimeout=0, startSize=150, nearCfg=null, writeSync=PRIMARY_SYNC,
storeFactory=CacheJdbcPojoStoreFactory [batchSizw=512, dataSrcBean=null,
dialect=null, maxPoolSize=4, maxWriteAttempts=2,
parallelLoadCacheMinThreshold=512,
hasher=o.a.i.cache.store.jdbc.JdbcTypeDefaultHasher@70925b45,
dataSrc=null], storeKeepBinary=false, loadPrevVal=false,
aff=o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction@1b5bc39d,
cacheMode=PARTITIONED, atomicityMode=ATOMIC, atomicWriteOrderMode=PRIMARY,
backups=1, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288,
rebalanceBatchesPrefetchCount=2, offHeapMaxMem=-1, swapEnabled=false,
maxConcurrentAsyncOps=500, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
memMode=ONHEAP_TIERED,
affMapper=o.a.i.i.processors.cache.CacheDefaultBinaryAffinityKeyMapper@1494b84d,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, readFromBackup=true,
nodeFilter=o.a.i.configuration.CacheConfiguration$IgniteAllNodesPredicate@1df98368,
sqlSchema=null, sqlEscapeAll=false, sqlOnheapRowCacheSize=10240,
snapshotableIdx=false, cpOnRead=true, topValidator=null], cacheType=USER,
initiatingNodeId=771a1f0a-df78-4872-9914-0e99a56ff562, nearCacheCfg=null,
clientStartOnly=false, stop=false, close=false, failIfExists=false,
template=false, exchangeNeeded=true, cacheFutTopVer=null,
cacheName=buCache]], clientNodes=null,
id=1eab4755451-5d255cd8-68f9-43a8-833c-c0ad665177ae,
clientReconnect=false], affTopVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=771a1f0a-df78-4872-9914-0e99a56ff562, addrs=[0:0:0:0:0:0:0:1,
127.0.0.1, 192.168.0.5, 2001:0:9d38:6ab8:203a:34b1:3f57:fffa,
2600:8806:0:8d00:0:0:0:1, 2600:8806:0:8d00:3ccf:1e94:1ab4:83a9,
2600:8806:0:8d00:58be:4acc:9730:7a66], sockAddrs=[LAPTOP-QIT4AVOG/
192.168.0.5:47500, /0:0:0:0:0:0:0:1:47500, LAPTOP-QIT4AVOG/192.168.0.5:47500,
/127.0.0.1:47500, LAPTOP-QIT4AVOG/192.168.0.5:47500, /192.168.0.5:47500,
LAPTOP-QIT4AVOG/192.168.0.5:47500,
/2001:0:9d38:6ab8:203a:34b1:3f57:fffa:47500, LAPTOP-QIT4AVOG/
192.168.0.5:47500, /2600:8806:0:8d00:0:0:0:1:47500,
/2600:8806:0:8d00:3ccf:1e94:1ab4:83a9:47500,
/2600:8806:0:8d00:58be:4acc:9730:7a66:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1461722594448, loc=true,

Re: Running gridgain yardstick

2016-04-26 Thread vkulichenko
Hi,

Yes, this is possible. There are two main interfaces: BenchmarkServer and
BenchmarkDriver. The first one is optional and can be used to start remote
servers. The driver is the test itself: it will be called by Yardstick in a
loop and measured. So to test two different technologies, you should provide
two different implementations of these classes.
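The driver-in-a-loop contract can be sketched in a few lines. Note that the interface below is an illustrative stand-in, not the actual Yardstick BenchmarkDriver API:

```java
import java.util.HashMap;
import java.util.Map;

public class DriverLoopSketch {
    // Illustrative stand-in for Yardstick's driver interface: one
    // benchmarked operation per call.
    interface Driver {
        boolean test(Map<Object, Object> ctx);
    }

    // Minimal "framework": call the driver in a loop and report throughput.
    // Comparing two technologies means running two Driver implementations
    // through the same loop.
    public static double run(Driver driver, int iterations) {
        Map<Object, Object> ctx = new HashMap<>();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++)
            driver.test(ctx);
        double seconds = (System.nanoTime() - start) / 1e9;
        return iterations / seconds;
    }

    public static void main(String[] args) {
        Driver noop = ctx -> true; // stand-in for a cache put/get benchmark
        System.out.printf("%.0f ops/sec%n", run(noop, 1_000_000));
    }
}
```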

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Running-gridgain-yardstick-tp4559p4568.html


RE: loadCache takes long time to complete with million rows

2016-04-26 Thread vkulichenko
Hi Arthi,

I see that you're using your own implementation of the store, so it's really
hard to say why it is slow. I would recommend debugging the code first to
see where the time is spent and whether resources are fully utilized. One of
the first possible optimizations would be to load the data in a multithreaded
fashion within the CacheStore.loadCache() implementation.
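The multithreaded-load suggestion, sketched without Ignite: split the key range into chunks and load them in parallel into a shared sink. In a real CacheStore.loadCache the sink would be the IgniteBiInClosure that Ignite passes in; the names and the fake "row-" data here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoadSketch {
    // Split [0, totalKeys) into one chunk per thread and load the chunks
    // concurrently into a shared, thread-safe sink.
    public static Map<Integer, String> loadAll(int totalKeys, int threads) {
        Map<Integer, String> sink = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (totalKeys + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            final int from = t * chunk;
            final int to = Math.min(totalKeys, from + chunk);
            pool.submit(() -> {
                // In a real store this would be a JDBC range query,
                // e.g. "SELECT ... WHERE id >= ? AND id < ?".
                for (int id = from; id < to; id++)
                    sink.put(id, "row-" + id);
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sink;
    }

    public static void main(String[] args) {
        System.out.println(loadAll(100_000, 8).size()); // prints: 100000
    }
}
```

The same shape works inside loadCache: each worker runs its own range query against the database and feeds entries to the closure as they arrive.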

Hi Shaomin,

This depends on a number of parameters, like the value size, network, etc.
If you feel that performance in your test could be better, please provide
the code and we will take a look.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/loadCache-takes-long-time-to-complete-with-million-rows-tp4534p4567.html


Re: Ignite Client Blocks On Ignite Server Restart

2016-04-26 Thread colinc
I'm experiencing something very similar to this. In my case, I have a load
test that is causing transaction contention. I don't see the problem when
transactions are switched off, even at high load. The transactions are
cross-cache if that's relevant at all.

The contention causes (expected) errors like the one below but the cluster
continues to work as normal until (in my case) destroyCache() is called. I'm
doing this in order to test different cache configurations.

At this point, the cluster effectively stops responding. Operations from
client nodes are not serviced - even if new nodes are added to the cluster -
until all the original nodes are killed.

I have been unable to replicate the problem with a simple test - even one
that creates e.g. an OptimisticLockFailureException. It seems to require this
level of contention before the problem occurs.

Failed to execute compound future reducer: Compound future listener []class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=1000,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@49550672]
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$PostLockClosure1.apply(IgniteTxLocalAdapter.java:3943)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$PostLockClosure1.apply(IgniteTxLocalAdapter.java:3895)
at
org.apache.ignite.internal.util.future.GridEmbeddedFuture$2.applyx(GridEmbeddedFuture.java:91)

Regards,
Colin.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Client-Blocks-On-Ignite-Server-Restart-tp4554p4564.html


Re: ODBC Driver?

2016-04-26 Thread Dmitriy Setrakyan
Let’s hope we can send it out for a vote some time next week.

On Tue, Apr 26, 2016 at 2:43 PM, vkulichenko 
wrote:

> Hi Arthi,
>
> ODBC driver is almost ready and will be available in 1.6. Here is the
> ticket: https://issues.apache.org/jira/browse/IGNITE-1786
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/ODBC-Driver-tp4557p4562.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: asm.writer error

2016-04-26 Thread vkulichenko
Ravi,

You should check what your current version of Hibernate depends on and use
the correct version of ASM. If you're using Maven, Hibernate should fetch all
its dependencies automatically.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/asm-writer-error-tp4536p4561.html


Re: Automatic persistence ?

2016-04-26 Thread vkulichenko
Hi Ravi,

The 'Demo' class in the schema-import demo has a 'preload' method that does this.
Did you try it?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Automatic-persistence-tp4413p4560.html


Running gridgain yardstick

2016-04-26 Thread akritibahal91
Hi,

I want to know more about GridGain Yardstick. I tried running it, but since
I'm on a Windows environment, I could not find the benchmark-run-all.bat
script in the bin folder.

Could you tell me where I could find it?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Running-gridgain-yardstick-tp4559.html


Re: Ignite Installation with Spark under CDH

2016-04-26 Thread mdolgonos
Vladimir,
I verified that the cache jar is in the Cloudera jars directory. All the
cache packages are also included in the deployment jar-with-dependencies, as
I used:

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spark</artifactId>
    <version>${ignite.version}</version>
    <scope>compile</scope>
</dependency>

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.0.0</version>
    <scope>compile</scope>
</dependency>

Not sure what else I can take a look at.
Thank you again for your help.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457p4558.html


ODBC Driver?

2016-04-26 Thread arthi
Hi Team,

Do we have support for an ODBC driver for Ignite?
If not in version 1.5, are there plans to provide one?

Thanks,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ODBC-Driver-tp4557.html


Re: ignite service session issue

2016-04-26 Thread Alexei Scherbakov
Hi,

You can also use the *screen* command, which provides more functionality:
Start a screen session and run ignite.sh in it.
Detach the screen session before closing the terminal.
After your next login, reattach the screen session.
For additional info, refer to the documentation: man screen
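As terminal commands, the steps above look roughly like this (the session name and script path are assumptions for illustration):

```shell
# Start ignite.sh inside a detached screen session named "ignite"
screen -dmS ignite /opt/ignite/bin/ignite.sh

# List running sessions, and reattach after your next login
screen -ls
screen -r ignite

# While attached: press Ctrl-A then d to detach without stopping the node
```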

2016-04-25 3:41 GMT+03:00 kevin.zheng :

> Thank you  for your help!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/ignite-service-session-issue-tp4475p4485.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Error running nodes in .net and c++

2016-04-26 Thread Vladimir Ozerov
Hi Murthy,

Do you start all nodes with the same XML configuration? Please ensure that
this is so, and all nodes know all caches from configuration in advance.

Vladimir.

On Tue, Apr 26, 2016 at 3:27 PM, Murthy Kakarlamudi 
wrote:

> Hi Vladimir...I made the update and still running into the same issue.
>
> Here is the updated spring config for my Java node:
> http://www.springframework.org/schema/beans;
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; xmlns:util="
> http://www.springframework.org/schema/util;
> xsi:schemaLocation="
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd
> http://www.springframework.org/schema/util
> http://www.springframework.org/schema/util/spring-util-2.5.xsd;>
>
>  class="org.springframework.jdbc.datasource.DriverManagerDataSource">
>  value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
>  value="jdbc:sqlserver://LAPTOP-QIT4AVOG\MSSQLSERVER64;databaseName=PrimeOne;integratedSecurity=true"
> />
> 
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>  class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 127.0.0.1:47500..47509
>
>
> Error:
> >>> Cache node started.
>
> [08:27:25,045][SEVERE][exchange-worker-#38%null%][GridDhtPartitionsExchangeFuture]
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11,
> minorTopVer=1], nodeId=bc7d2aa2, evt=DISCOVERY_CUSTOM_EVT]
> class org.apache.ignite.IgniteException: Spring application context
> resource is not injected.
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:156)
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:96)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1243)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:956)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:523)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> [08:27:25,063][SEVERE][exchange-worker-#38%null%][GridCachePartitionExchangeManager]
> Failed to wait for completion of partition map exchange (preloading will
> not start): GridDhtPartitionsExchangeFuture [dummy=false,
> forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent
> [customMsg=DynamicCacheChangeBatch [reqs=[DynamicCacheChangeRequest
> [deploymentId=8ea535e3451-d29afc27-9b4b-4125-bbf2-232c08daa0cb,
> startCfg=CacheConfiguration [name=buCache,
> storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
> rebalanceTimeout=1, evictPlc=null, evictSync=false,
> evictKeyBufSize=1024, evictSyncConcurrencyLvl=4, evictSyncTimeout=1,
> evictFilter=null, evictMaxOverflowRatio=10.0, eagerTtl=true,
> dfltLockTimeout=0, startSize=150, nearCfg=null, writeSync=PRIMARY_SYNC,
> storeFactory=CacheJdbcPojoStoreFactory [batchSizw=512,
> dataSrcBean=myDataSource, dialect=null, maxPoolSize=4, maxWriteAttempts=2,
> parallelLoadCacheMinThreshold=512,
> hasher=o.a.i.cache.store.jdbc.JdbcTypeDefaultHasher@78d010a2,
> dataSrc=null], storeKeepBinary=false, loadPrevVal=false,
> aff=o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction@76311661,
> cacheMode=PARTITIONED, atomicityMode=ATOMIC, atomicWriteOrderMode=PRIMARY,
> backups=1, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
> rebalanceOrder=0, rebalanceBatchSize=524288,
> rebalanceBatchesPrefetchCount=2, offHeapMaxMem=-1, swapEnabled=false,
> maxConcurrentAsyncOps=500, writeBehindEnabled=false,
> writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
> writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
> memMode=ONHEAP_TIERED,
> affMapper=o.a.i.i.processors.cache.CacheDefaultBinaryAffinityKeyMapper@2e41d426,
> 

Re: Detecting a node leaving the cluster

2016-04-26 Thread Alexei Scherbakov
Hi,

Why not use a cluster singleton service?
You should establish your connections in the service's init method.
Please check [1] for details.
Would that work for you?

[1] https://apacheignite.readme.io/docs/cluster-singletons#cluster-singleton
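Both suggestions in this thread boil down to the same pattern: detect the failure, then do the heavy reconnection work off the callback thread. A plain-Java sketch of that hand-off (class and method names are illustrative; in real code the listener would be registered via ignite.events().localListen for EVT_NODE_LEFT / EVT_NODE_FAILED, or the work placed in a cluster singleton's init):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NodeLeftHandler {
    private final ExecutorService recoveryPool = Executors.newFixedThreadPool(2);
    final AtomicInteger recovered = new AtomicInteger();

    // What an EVT_NODE_LEFT / EVT_NODE_FAILED listener should do: hand the
    // event off and return immediately, never blocking the discovery thread.
    public boolean onNodeLeft(String nodeId) {
        recoveryPool.submit(() -> reestablishConnections(nodeId));
        return true; // keep listening
    }

    private void reestablishConnections(String nodeId) {
        // Heavy, possibly blocking recovery work goes here: reopen the
        // third-party connections the failed node owned, reassign its
        // sessions, etc. (illustrative placeholder).
        recovered.incrementAndGet();
    }

    public void shutdown() {
        recoveryPool.shutdown();
        try {
            recoveryPool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```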

2016-04-26 14:44 GMT+03:00 Vladimir Ozerov :

> Hi Ralph,
>
> Yes, this is how we normally respond to node failures - by listening to
> events. However, please note that you should not perform heavy or blocking
> operations in the callback, as it might have adverse effects on node
> communication. Instead, it is better to move heavy operations into a
> separate thread or thread pool.
>
> Vladimir.
>
> On Mon, Apr 25, 2016 at 2:54 PM, Ralph Goers 
> wrote:
>
>> Great, thanks!
>>
>> Is listening for that the way you would implement what I am trying to do?
>>
>> Ralph
>>
>> On Apr 25, 2016, at 4:22 AM, Vladimir Ozerov 
>> wrote:
>>
>> Ralph,
>>
>> EVT_NODE_LEFT and EVT_NODE_FAILED occur on the local node. They essentially
>> mean "I saw that a remote node went down".
>>
>> Vladimir.
>>
>> On Sat, Apr 23, 2016 at 5:48 PM, Ralph Goers 
>> wrote:
>>
>>> Some more information that may be of help.
>>>
>>> Each user of a client application creates a “session” that is
>>> represented in the distributed cache. Each session has its own connection
>>> to the third party application. If a user uses multiple client applications
>>> they will reuse the same session and connection with the third party
>>> application. So when a single node goes down all the user’s sessions need
>>> to become “owned” by different nodes.
>>>
>>> In the javadoc I do see IgniteEvents.localListen(), but the description
>>> says it listens for “local” events. I wouldn’t expect EVT_NODE_LEFT or
>>> EVT_NODE_FAILED to be considered local events, so I am a bit confused as to
>>> what the method does.
>>>
>>> Ralph
>>>
>>> On Apr 23, 2016, at 6:49 AM, Ralph Goers 
>>> wrote:
>>>
>>> From what I understand in the documentation client mode will mean I will
>>> lose high availability, which is the point of using a distributed cache.
>>>
>>> The architecture is such that we have multiple client applications that
>>> need to communicate with the service that has the clustered cache. The
>>> client applications expect to get callbacks when events occur in the third
>>> party application the service is communicating with. If one of the service
>>> nodes fail - for example during a rolling deployment - we need one of the
>>> other nodes to re-establish the connection with the third party so it can
>>> continue to monitor for the events. Note that the service servers are
>>> load-balanced so they may each have an arbitrary number of connections with
>>> the third party.
>>>
>>> So I either need a listener that tells me when one of the nodes in the
>>> cluster has left or a way of creating the connection using something ignite
>>> provides so that it automatically causes the connection to be recreated
>>> when a node leaves.
>>>
>>> Ralph
>>>
>>>
>>> On Apr 23, 2016, at 12:01 AM, Владислав Пятков 
>>> wrote:
>>>
>>> Hello Ralph,
>>>
>>> I think the correct way is to use a client node (with setClientMode =
>>> true) to control the cluster. A client node is isolated from data
>>> processing and is not subject to failure under load.
>>> Why do you connect each node to the third-party application instead of
>>> doing that only from the client?
>>>
>>> On Sat, Apr 23, 2016 at 4:10 AM, Ralph Goers >> > wrote:
>>>
 I have an application that is using Ignite for a clustered cache.  Each
 member of the cache will have connections open with a third party
 application. When a cluster member stops, its connections must be
 re-established on other cluster members.

 I can do this manually if I have a way of detecting a node has left the
 cluster, but I am hoping that there is some other recommended way of
 handling this.

 Any suggestions?

 Ralph

>>>
>>>
>>>
>>> --
>>> Vladislav Pyatkov
>>>
>>>
>>>
>>>
>>
>


-- 

Best regards,
Alexei Scherbakov


Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
Hi Kevin,

Yes, log files are created under this directory by default.

Vladimir.

On Tue, Apr 26, 2016 at 3:18 PM, Zhengqingzheng 
wrote:

> Hi Vladimir,
>
> No problem.
>
> I will re-run the loading process and give you  the log file.
>
> To be clear, when you say log file, do you mean files located at
> work/log/*?
>
> I use IgniteCache.loadCache(null, “select * from table”) to load all the
> data.
>
>
>
> Best regards,
>
> Kevin
>
>
>
> *From:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *Sent:* April 26, 2016 20:15
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite cache data size problem.
>
>
>
> Hi Kevin,
>
>
>
> Could you please re-run your case and attach Ignite logs and GC logs from
> all participating servers? I would expect that this is either a kind of
> out-of-memory problem or network saturation. Also, please explain how
> exactly you load data into Ignite. Do you use *DataStreamer*, or maybe
> *IgniteCache.loadCache*, or *IgniteCache.put*?
>
>
>
> Vladimir.
>
>
>
> On Tue, Apr 26, 2016 at 3:01 PM, Zhengqingzheng 
> wrote:
>
> Hi Vladimir,
>
> Thank you for your help.
>
> I tried to load 1 million records and calculated each object’s size
> (including the key and value objects, where the key is a string), summed
> them up, and found the total memory consumption to be 130MB.
>
>
>
> Because the ./ignitevisor.sh command only shows the number of records and no
> data allocation information, I don’t know how much memory has been consumed
> by each type of cache.
>
>
>
> My result is as follows:
>
> Log:
>
> Found big object: [Ljava.util.HashMap$Node;@24833164 size:
> 30.888206481933594Mb
>
> Found big object:   java.util.HashMap@15819383 size: 30.88824462890625Mb
>
> Found big object: java.util.HashSet@10236420 size: 30.888259887695312Mb
>
> key size: 32388688 human readable data: 30.888259887695312Mb
>
> Found big object:   [Lorg.jsr166.ConcurrentHashMap8$Node;@29556439 size:
> 129.99818420410156Mb
>
> Found big object: org.jsr166.ConcurrentHashMap8@19238297 size:
> *129.99822235107422Mb*
>
> value size: 136313016 human readable data: 129.99822235107422Mb
>
> *The total number of records is 47 million, so the data size inside the
> cache should be about 130MB × 47 = 6110MB (around 6GB).*
>
>
>
> However, when I try to load the whole data into the cache, I still get
> exceptions:
>
> The exception information is listed as follows:
>
> 1.   -- exception info from client
> 
>
> Before the exception occurred, I had 10 nodes on two servers: server1 (48GB
> RAM) has 6 nodes, each assigned a 7GB JVM heap; server2 (32GB RAM) has 4
> nodes with the same JVM settings as the previous one.
>
> After the exception, the client stopped and 8 nodes were left (all of
> server1’s nodes remained, with no exceptions on that server; on server2,
> 2 nodes remained and two nodes dropped).
>
> The total number of records loaded is 37 million.
>
>
>
> [19:01:47] Topology snapshot [ver=77, servers=9, clients=1, CPUs=20,
> heap=64.0GB]
>
> [19:01:47,463][SEVERE][pub-#46%null%][GridTaskWorker] Failed to obtain
> remote job result policy for result from ComputeTask.result(..) method
> (will fail the whole task): GridJobResultImpl [job=C2 [],
> sib=GridJobSiblingImpl
> [sesId=0dbc0f15451-880a2dd1-bc95-4084-a705-4effcec5d2cd,
> jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924,
> nodeId=832bed3e-dc5d-4743-9853-127e3b516924, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=832bed3e-dc5d-4743-9853-127e3b516924,
> addrs=[0:0:0:0:0:0:0:1%lo, 10.120.70.122, 127.0.0.1],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /0:0:0:0:0:0:0:1%lo:47500, /
> 10.120.70.122:47500, /127.0.0.1:47500], discPort=47500, order=7,
> intOrder=7, lastExchangeTime=1461663619161, loc=false,
> ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class
> o.a.i.cluster.ClusterTopologyException: Node has left grid:
> 832bed3e-dc5d-4743-9853-127e3b516924, hasRes=true, isCancelled=false,
> isOccupied=true]
>
> class org.apache.ignite.cluster.ClusterTopologyException: Node has left
> grid: 832bed3e-dc5d-4743-9853-127e3b516924
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onNodeLeft(GridTaskWorker.java:1315)
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$TaskDiscoveryListener$1.run(GridTaskProcessor.java:1246)
>
>  at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6453)
>
>  at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:788)
>
>  at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>  at
> 

Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
Hi Kevin,

Could you please re-run your case and attach Ignite logs and GC logs from
all participating servers? I would expect that this is either a kind of
out-of-memory problem or network saturation. Also, please explain how
exactly you load data into Ignite: do you use *DataStreamer*, or maybe
*IgniteCache.loadCache*, or *IgniteCache.put*?

Vladimir.
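For reference, bulk loading through the *DataStreamer* mentioned above typically looks like the sketch below (the cache name, key/value types, and buffer size are illustrative assumptions, not from this thread):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("example-config.xml")) {
            // The streamer batches entries and ships each batch directly
            // to its affinity node, which is much faster than put() loops.
            try (IgniteDataStreamer<String, String> stmr =
                     ignite.dataStreamer("myCache")) {
                stmr.perNodeBufferSize(1024); // entries buffered per node
                for (int i = 0; i < 1_000_000; i++)
                    stmr.addData("key-" + i, "value-" + i);
            } // close() flushes any remaining buffered entries
        }
    }
}
```

This is a sketch only; it assumes a cache named "myCache" already exists in the configuration and that the nodes have enough heap to absorb the streamed batches.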

On Tue, Apr 26, 2016 at 3:01 PM, Zhengqingzheng 
wrote:

> Hi Vladimir,
>
> Thank you for your help.
>
> I tried to load 1 million records and calculated each object’s size
> (including the key and value objects, where the key is a string), summed
> the results, and found the total memory consumption to be 130 MB.
>
>
>
> Because the ./ignitevisor.sh command only shows the number of records and
> no data allocation information, I don’t know how much memory has been
> consumed by each type of cache.
>
>
>
> My result is as follows:
>
> Log:
>
> Found big object: [Ljava.util.HashMap$Node;@24833164 size:
> 30.888206481933594Mb
>
> Found big object:   java.util.HashMap@15819383 size: 30.88824462890625Mb
>
> Found big object: java.util.HashSet@10236420 size: 30.888259887695312Mb
>
> key size: 32388688 human readable data: 30.888259887695312Mb
>
> Found big object:   [Lorg.jsr166.ConcurrentHashMap8$Node;@29556439 size:
> 129.99818420410156Mb
>
> Found big object: org.jsr166.ConcurrentHashMap8@19238297 size:
> *129.99822235107422Mb*
>
> value size: 136313016 human readable data: 129.99822235107422Mb
>
> *The total number of records is 47 million, so the data size inside the
> cache should be 130 MB × 47 = 6110 MB (around 6 GB).*
>
>
>
> However, when I try to load the whole data into the cache, I still get
> exceptions:
>
> The exception information is listed as follows:
>
> 1.   -- exception info from client
> 
>
> Before the exception occurred, I had 10 nodes on two servers: server1 (48 GB
> RAM) had 6 nodes, each assigned a 7 GB JVM heap; server2 (32 GB RAM) had 4
> nodes with the same JVM settings.
>
> After the exception, the client stopped and 8 nodes were left (all of
> server1’s nodes remained, with no exceptions on that server; on server2,
> 2 nodes remained and two nodes dropped).
>
> The total number of records loaded is 37Million.
>
>
>
> [19:01:47] Topology snapshot [ver=77, servers=9, clients=1, CPUs=20,
> heap=64.0GB]
>
> [19:01:47,463][SEVERE][pub-#46%null%][GridTaskWorker] Failed to obtain
> remote job result policy for result from ComputeTask.result(..) method
> (will fail the whole task): GridJobResultImpl [job=C2 [],
> sib=GridJobSiblingImpl
> [sesId=0dbc0f15451-880a2dd1-bc95-4084-a705-4effcec5d2cd,
> jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924,
> nodeId=832bed3e-dc5d-4743-9853-127e3b516924, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=832bed3e-dc5d-4743-9853-127e3b516924,
> addrs=[0:0:0:0:0:0:0:1%lo, 10.120.70.122, 127.0.0.1],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /0:0:0:0:0:0:0:1%lo:47500, /
> 10.120.70.122:47500, /127.0.0.1:47500], discPort=47500, order=7,
> intOrder=7, lastExchangeTime=1461663619161, loc=false,
> ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class
> o.a.i.cluster.ClusterTopologyException: Node has left grid:
> 832bed3e-dc5d-4743-9853-127e3b516924, hasRes=true, isCancelled=false,
> isOccupied=true]
>
> class org.apache.ignite.cluster.ClusterTopologyException: Node has left
> grid: 832bed3e-dc5d-4743-9853-127e3b516924
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onNodeLeft(GridTaskWorker.java:1315)
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$TaskDiscoveryListener$1.run(GridTaskProcessor.java:1246)
>
>  at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6453)
>
>  at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:788)
>
>  at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>  at java.lang.Thread.run(Thread.java:745)
>
> [19:02:16] Ignite node stopped OK [uptime=01:21:52:963]
>
>
>
> 2.   -- Also, I used the recommended JVM settings and got the following
> log information
> 
>
> Java HotSpot(TM) Client VM (25.66-b18) for windows-x86 JRE (1.8.0_66-b18),
> built on Nov  9 2015 10:58:29 by "java_re" with MS VC++ 10.0 (VS2010)
>
> Memory: 4k page, physical 6291064k(1912832k free), swap 12580288k(6996288k
> free)
>
> CommandLine flags: -XX:GCLogFileSize=104857600
> -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824
> 

Re: loadCache takes long time to complete with million rows

2016-04-26 Thread arthi

hi Val,

There is enough heap available. I initiated the process using 10g and the
utilization is below 5g.
profile.PNG
  

Thanks,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/loadCache-takes-long-time-to-complete-with-million-rows-tp4534p4543.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error running nodes in .net and c++

2016-04-26 Thread Vladimir Ozerov
Hi Murthy,

Seems that you faced a kind of usability issue, which happens only in some
specific cases. Please try replacing the following line in your config:

[original configuration line stripped by the mailing-list archive]

with this:

[replacement configuration line stripped by the mailing-list archive]

It should help.

Vladimir.

On Tue, Apr 26, 2016 at 1:36 AM, Murthy Kakarlamudi 
wrote:

> Hi Alexey... Apologies for the delay in my response. Below are the 2 links from
> gdrive for my Java and c++ projects.
>
> Java Project:
> https://drive.google.com/open?id=0B8lM91-_3MwRZmF6N0tnN1pyN2M
>
> C++ Project:
> https://drive.google.com/open?id=0B8lM91-_3MwRMGE5akVWVXc0RXc
>
> Please let me know if you have any difficulty downloading the projects.
>
> Thanks,
> Satya.
>
> On Mon, Apr 25, 2016 at 10:49 AM, Alexey Kuznetsov <
> akuznet...@gridgain.com> wrote:
>
>> I see in stack trace "Caused by: class org.apache.ignite.IgniteException:
>> Spring application context resource is not injected."
>>
>> Also, CacheJdbcPojoStoreFactory contains this declaration:
>> @SpringApplicationContextResource
>> private transient Object appCtx;
>>
>> Anybody know why appCtx may not be injected?
>>
>> Also, Satya, would it be possible for you to prepare a small reproducible
>> example that we could debug?
>>
>>
>> On Mon, Apr 25, 2016 at 9:39 PM, Vladimir Ozerov 
>> wrote:
>>
>>> Alexey Kuznetsov,
>>>
>>> Since you have more expertise with the POJO store, could you please
>>> advise what could cause this exception? It seems that the POJO store
>>> expects some injection which doesn't happen.
>>> Are there any specific requirements here? C++ node starts as a regular
>>> node and also use Spring.
>>>
>>> Vladimir.
>>>
>>> On Mon, Apr 25, 2016 at 5:32 PM, Murthy Kakarlamudi 
>>> wrote:
>>>
 Any help on this issue please...

 On Sat, Apr 16, 2016 at 7:29 PM, Murthy Kakarlamudi 
 wrote:

> Hi,
>In my use case, I am starting a node from .net which loads data
> from SQL Server table into cache upon start up. I have to read those
> entries from cache from a c++ node that acts as a client. I am getting the
> below error trying to start the node from c++.
>
> [19:08:57] Security status [authentication=off, tls/ssl=off]
> [19:08:58,163][SEVERE][main][IgniteKernal] Failed to start manager:
> GridManagerAdapter [enabled=true,
> name=o.a.i.i.managers.discovery.GridDiscoveryManager]
> class org.apache.ignite.IgniteCheckedException: Remote node has peer
> class loading enabled flag different from local [locId8=f02445af,
> locPeerClassLoading=true, rmtId8=8e52f9c9, rmtPeerClassLoading=false,
> rmtAddrs=[LAPTOP-QIT4AVOG/0:0:0:0:0:0:0:1, LAPTOP-QIT4AVOG/127.0.0.1,
> LAPTOP-QIT4AVOG/192.168.0.5,
> LAPTOP-QIT4AVOG/2001:0:9d38:90d7:145b:5bf:bb9b:11d9,
> LAPTOP-QIT4AVOG/2600:8806:0:8d00:0:0:0:1,
> /2600:8806:0:8d00:3ccf:1e94:1ab4:83a9,
> /2600:8806:0:8d00:f114:bf30:2068:352d]]
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.checkAttributes(GridDiscoveryManager.java:1027)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:680)
> at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> at
> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java
>
> Below is my config for the .NET node:
>
> [Spring XML configuration mangled by the mailing-list archive. The
> surviving fragments show an IgniteConfiguration bean containing a
> ConnectorConfiguration and a CacheConfiguration whose cache store factory
> is a PlatformDotNetCacheStoreFactory pointing at
> "TestIgniteDAL.SQLServerStore, TestIgniteDAL".]

Re: Ignite Installation with Spark under CDH

2016-04-26 Thread Vladimir Ozerov
Hi,

It is pretty hard to say what the root cause is, especially in complex
deployments like CDH. Most probably your JAR is packaged incorrectly, because
your application is able to load Ignite classes but cannot load the JCache
API.
Could you try simply adding cache-api-1.0.0.jar to all the places and scripts
where you added the Ignite JARs and/or your jar-with-dependencies?

Vladimir.
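As one possible shape of such a fix (the jar names and layout under the IGNITE_HOME quoted later in this thread are assumptions, and the application class is taken from the stack trace below), the JCache API jar can be passed to spark-submit alongside the Ignite jars:

```shell
# Hypothetical invocation: explicitly ship the Ignite jars plus the
# JCache API jar (cache-api-1.0.0.jar) to the Spark driver and executors.
IGNITE_HOME=/etc/ignite-1.5.0

spark-submit \
  --class training.PnLMergerVectorIgnite \
  --jars ${IGNITE_HOME}/libs/ignite-core-1.5.0.jar,\
${IGNITE_HOME}/libs/cache-api-1.0.0.jar,\
${IGNITE_HOME}/libs/optional/ignite-spark/ignite-spark-1.5.0.jar \
  my-app-jar-with-dependencies.jar
```

This is a sketch of the general approach, not a verified command for this CDH installation; the exact jar versions and paths must match the local Ignite distribution.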

On Tue, Apr 26, 2016 at 12:43 AM, mdolgonos 
wrote:

> Vladimir,
> There are 2 things that I'm experiencing so far:
> 1. I have added the following code to spark-env.sh in my CDH installation
> IGNITE_HOME=/etc/ignite-1.5.0
> IGNITE_LIBS="${IGNITE_HOME}/libs/*"
>
> for file in ${IGNITE_HOME}/libs/*
> do
> if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ];
> then
> IGNITE_LIBS=${IGNITE_LIBS}:${file}/*
> fi
> done
>
> but Ignite jars are still not recognized by Spark after restarting Spark as
> well as the entire CDH. My location of CDH is the default one:
> /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/etc/spark/conf.dist
> So I decided to compile my code into a jar-with-dependencies which led me
> to
> the second issue:
>
> 2. Looks like the jars were discovered by spark-submit, however, now I'm
> getting the following exception:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> javax/cache/configuration/MutableConfiguration
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
> at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.marshallerSystemCache(IgnitionEx.java:2098)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeDefaultCacheConfiguration(IgnitionEx.java:1914)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeConfiguration(IgnitionEx.java:1899)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1573)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> at
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:153)
> at
> org.apache.ignite.spark.IgniteContext.(IgniteContext.scala:62)
> at
> training.PnLMergerVectorIgnite$.main(PnLMergerVectorIgnite.scala:50)
> at training.PnLMergerVectorIgnite.main(PnLMergerVectorIgnite.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
>
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
> at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException:
> javax.cache.configuration.MutableConfiguration
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> I know that the class javax.cache.configuration.MutableConfiguration
> belongs
> to cache-api-1.0.0.jar and I see it present in my jar-with-dependencies
>
> Thank you for 

Re: Detecting a node leaving the cluster

2016-04-26 Thread Vladimir Ozerov
Hi Ralph,

Yes, this is how we normally respond to node failures - by listening to
events. However, please note that you should not perform heavy or blocking
operations in the callback, as that might have adverse effects on node
communication. Instead, it is better to move heavy operations into a separate
thread or thread pool.

Vladimir.
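A minimal sketch of such a listener follows (it assumes EVT_NODE_LEFT and EVT_NODE_FAILED have been enabled via IgniteConfiguration.setIncludeEventTypes, and the recovery logic itself is application-specific):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class NodeLeftListener {
    public static void register(Ignite ignite) {
        // Heavy recovery work goes to a separate pool so the
        // discovery callback itself never blocks.
        ExecutorService recoveryPool = Executors.newSingleThreadExecutor();

        IgnitePredicate<DiscoveryEvent> lsnr = evt -> {
            recoveryPool.submit(() ->
                // Re-establish sessions/connections owned by the
                // departed node here (application-specific).
                System.out.println("Node left/failed: " + evt.eventNode().id()));
            return true; // keep listening for further events
        };

        ignite.events().localListen(lsnr,
            EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}
```

The callback only hands work off to the pool and returns immediately, which is the pattern recommended above.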

On Mon, Apr 25, 2016 at 2:54 PM, Ralph Goers 
wrote:

> Great, thanks!
>
> Is listening for that the way you would implement what I am trying to do?
>
> Ralph
>
> On Apr 25, 2016, at 4:22 AM, Vladimir Ozerov  wrote:
>
> Ralph,
>
> EVT_NODE_LEFT and EVT_NODE_FAILED occur on the local node. They essentially
> mean "I saw that a remote node went down".
>
> Vladimir.
>
> On Sat, Apr 23, 2016 at 5:48 PM, Ralph Goers 
> wrote:
>
>> Some more information that may be of help.
>>
>> Each user of a client application creates a “session” that is represented
>> in the distributed cache. Each session has its own connection to the third
>> party application. If a user uses multiple client applications they will
>> reuse the same session and connection with the third party application. So
>> when a single node goes down all the user’s sessions need to become “owned”
>> by different nodes.
>>
>> In the javadoc I do see IgniteEvents.localListen(), but the description
>> says it listens for “local” events. I wouldn’t expect EVT_NODE_LEFT or
>> EVT_NODE_FAILED to be considered local events, so I am a bit confused as to
>> what the method does.
>>
>> Ralph
>>
>> On Apr 23, 2016, at 6:49 AM, Ralph Goers 
>> wrote:
>>
>> From what I understand in the documentation, client mode will mean I
>> lose high availability, which is the point of using a distributed cache.
>>
>> The architecture is such that we have multiple client applications that
>> need to communicate with the service that has the clustered cache. The
>> client applications expect to get callbacks when events occur in the third
>> party application the service is communicating with. If one of the service
>> nodes fail - for example during a rolling deployment - we need one of the
>> other nodes to re-establish the connection with the third party so it can
>> continue to monitor for the events. Note that the service servers are
>> load-balanced so they may each have an arbitrary number of connections with
>> the third party.
>>
>> So I either need a listener that tells me when one of the nodes in the
>> cluster has left or a way of creating the connection using something ignite
>> provides so that it automatically causes the connection to be recreated
>> when a node leaves.
>>
>> Ralph
>>
>>
>> On Apr 23, 2016, at 12:01 AM, Владислав Пятков 
>> wrote:
>>
>> Hello Ralph,
>>
>> I think the correct way is to use a client node (with setClientMode = true)
>> to control the cluster. A client node is isolated from data processing and
>> is not subject to failure under load.
>> Why do you connect each node to the third-party application instead of
>> doing that only from the client?
>>
>> On Sat, Apr 23, 2016 at 4:10 AM, Ralph Goers 
>> wrote:
>>
>>> I have an application that is using Ignite for a clustered cache.  Each
>>> member of the cache will have connections open with a third party
>>> application. When a cluster member stops its connections must be
>>> re-established on other cluster members.
>>>
>>> I can do this manually if I have a way of detecting a node has left the
>>> cluster, but I am hoping that there is some other recommended way of
>>> handling this.
>>>
>>> Any suggestions?
>>>
>>> Ralph
>>>
>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>>
>>
>>
>


asm.writer error

2016-04-26 Thread Ravi Puri
Error at cache.loadCache(null, 100_00);
Please advise on the cause, as I tried with the asm-all-4.2 jar and even with
spring.jar, as well as the asm-2.2.3 and asm-1.5.3 jars.


java.lang.NoSuchMethodError: org.objectweb.asm.ClassWriter.(Z)V
at
net.sf.cglib.core.DebuggingClassWriter.(DebuggingClassWriter.java:47)
at
net.sf.cglib.core.DefaultGeneratorStrategy.getClassWriter(DefaultGeneratorStrategy.java:30)
at
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:24)
at
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:145)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:117)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at
org.hibernate.proxy.pojo.cglib.CGLIBLazyInitializer.getProxyFactory(CGLIBLazyInitializer.java:117)
at
org.hibernate.proxy.pojo.cglib.CGLIBProxyFactory.postInstantiate(CGLIBProxyFactory.java:43)
at
org.hibernate.tuple.entity.PojoEntityTuplizer.buildProxyFactory(PojoEntityTuplizer.java:162)
at
org.hibernate.tuple.entity.AbstractEntityTuplizer.(AbstractEntityTuplizer.java:135)
at
org.hibernate.tuple.entity.PojoEntityTuplizer.(PojoEntityTuplizer.java:55)
at
org.hibernate.tuple.entity.EntityEntityModeToTuplizerMapping.(EntityEntityModeToTuplizerMapping.java:56)
at
org.hibernate.tuple.entity.EntityMetamodel.(EntityMetamodel.java:295)
at
org.hibernate.persister.entity.AbstractEntityPersister.(AbstractEntityPersister.java:434)
at
org.hibernate.persister.entity.SingleTableEntityPersister.(SingleTableEntityPersister.java:109)
at
org.hibernate.persister.PersisterFactory.createClassPersister(PersisterFactory.java:55)
at
org.hibernate.impl.SessionFactoryImpl.(SessionFactoryImpl.java:226)
at
org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1294)
at
org.apache.ignite.cache.store.hibernate.CacheHibernateStoreSessionListener.start(CacheHibernateStoreSessionListener.java:151)
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.startStoreSessionListeners(GridCacheUtils.java:1728)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.start0(GridCacheStoreManagerAdapter.java:213)
at
org.apache.ignite.internal.processors.cache.store.CacheOsStoreManager.start0(CacheOsStoreManager.java:64)
at
org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter.start(GridCacheManagerAdapter.java:50)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1034)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1630)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1545)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:944)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:511)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Unknown Source)
[15:32:52,204][ERROR][exchange-worker-#49%null%][GridCachePartitionExchangeManager]
Runtime error caught during grid runnable execution: GridWorker
[name=partition-exchanger, gridName=null, finished=false, isCancelled=false,
hashCode=1289862143, interrupted=false, runner=exchange-worker-#49%null%]
java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:1705)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1098)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:87)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:334)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:861)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
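For what it's worth, the missing ClassWriter(boolean) constructor exists in old ASM releases (1.5.x/2.x) but was removed from later ones, so cglib 2.x cannot run against asm-all-4.2 if that jar wins on the classpath. One common remedy - shown here as a hypothetical Maven fragment, not a confirmed fix for this setup - is the self-contained cglib-nodep artifact:

```xml
<!-- cglib-nodep bundles a repackaged (shaded) copy of ASM, so it cannot
     clash with an asm-all jar that other libraries put on the classpath -->
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>2.2.2</version>
</dependency>
```

With cglib-nodep on the classpath, the standalone asm jars can usually be dropped entirely, which avoids this kind of NoSuchMethodError.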

loadCache takes long time to complete with million rows

2016-04-26 Thread arthi
Hi Team,

I am loading a partitioned cache with 30 million rows using the loadCache API
from a persistence store.
The data gets loaded into the cache, but the process takes a long time to
complete.

Here is the config -

[cache and data-source configuration XML stripped by the mailing-list
archive; the surviving fragments reference the fields sid_mah_id, category
and sid_per_id]
Can you please help me figure out what the issue is?
The same loadCache API works faster for smaller data sets.
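For context, a loadCache call against a JDBC POJO store is typically shaped as below (the cache name, type name, and SQL are assumptions for illustration; with CacheJdbcPojoStore the optional arguments are {type name, custom SQL} pairs):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LoadCacheExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config.xml")) {
            IgniteCache<Long, Object> cache = ignite.cache("myPartitionedCache");
            // Passing a custom SQL narrows the load; splitting the key range
            // across several calls is one way to parallelize a large load.
            cache.loadCache(null,
                "com.example.MyType",            // hypothetical value type
                "select * from MY_TABLE");
        }
    }
}
```

This is only a sketch of the API shape, not a tuned configuration for the 30-million-row case in question.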

Thanks,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/loadCache-takes-long-time-to-complete-with-million-rows-tp4534.html
Sent 

Re: Azure Integration

2016-04-26 Thread arthi
Thank you,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Azure-Integration-tp4531p4533.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Azure Integration

2016-04-26 Thread Dmitriy Setrakyan
On Mon, Apr 25, 2016 at 10:41 PM, arthi 
wrote:

>
> Can a Ignite grid run on Azure cloud?
>

Yes, you can run Ignite on any cloud. The only requirement is that TCP/IP is
supported. You will have to start 1 or 2 nodes first and specify their IP
addresses in the discovery configuration.

More information here:
https://apacheignite.readme.io/docs/cluster-config#static-ip-based-discovery
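Programmatically, the static-IP discovery that the linked page describes can be sketched like this (the addresses below are placeholders, not real seed nodes):

```java
import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscovery {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // Addresses (with discovery port ranges) of the 1-2 seed nodes
        // that are started first; placeholders for the cloud VMs' IPs.
        ipFinder.setAddresses(Arrays.asList(
            "10.0.0.1:47500..47509",
            "10.0.0.2:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Nodes: " + ignite.cluster().nodes().size());
        }
    }
}
```

Every node joining the cluster would point its IP finder at the same seed addresses.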


>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Azure-Integration-tp4531.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>