Re: ClassNotFoundException using peer class loading on cluster

2020-10-08 Thread Vladislav Pyatkov
Hi Alan,

Could you provide the class (the IgniteRunnable, plus any other classes that
are loaded through peer class loading along with it) that shows this unusual
behavior?
And please attach a full stack trace of the exception.

On Thu, Oct 8, 2020 at 5:19 PM Alan Ward  wrote:

> I'm using peer class loading on a 5 node ignite cluster, persistence
> enabled, Ignite version 2.8.1. I have a custom class that implements
> IgniteRunnable and I launch that class on the cluster. This works fine when
> deploying to an ignite node running on a single node cluster locally, but
> fails with a ClassNotFound exception (on my custom IgniteRunnable class) on
> the 5 node cluster. I can see a reference to this class name in both the
> work-dir/marshaller and work-dir/binary_meta directories on each cluster
> node, so it seems like the class should be there.
>
> I have many other IgniteRunnables and distributed closures that all work
> fine -- this is the only one giving me trouble. I tried renaming the class,
> but that didn't help either.
>
> After nearly three days, I'm running out of ideas (other than giving up
> and statically deploying the jar to each node, which I really want to
> avoid), and I'm looking for advice on how to troubleshoot an issue like
> this.
>
> Thanks for your help,
>
> Alan
>
>
>

-- 
Vladislav Pyatkov


Re: Read IOPS are higher than write when all we are doing is write

2020-09-05 Thread Vladislav Pyatkov
Yes, it can. Moreover, index pages can also be evicted, and they will then
need to be read back from disk for updating when a new entry is inserted.
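Since half-filled data pages and index pages can be evicted and re-read during heavy inserts, one common mitigation is to size the data region so the working set stays in memory. A minimal sketch using the Ignite 2.x API; the region name and sizes are illustrative assumptions, not values from this thread:

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSketch {
    public static IgniteConfiguration config() {
        // Region sized so data + index pages of the hot set fit in memory,
        // reducing evict/re-read churn during a write-heavy load.
        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
        regionCfg.setName("writes");                        // illustrative name
        regionCfg.setInitialSize(4L * 1024 * 1024 * 1024);  // 4 GB
        regionCfg.setMaxSize(8L * 1024 * 1024 * 1024);      // 8 GB
        regionCfg.setPersistenceEnabled(true);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDataRegionConfigurations(regionCfg);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        return cfg;
    }
}
```

The read IOPS in the original question come precisely from pages that no longer fit; a larger region only shifts the point at which that starts.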

On Sat, Sep 5, 2020 at 6:39 PM krkumar24061...@gmail.com <
krkumar24061...@gmail.com> wrote:

> Hi Vlad - Thanks for the response.
>
> In this test, we write once and do no reads/updates throughout the process.
> Would that still result in pages being read? Also, are you saying that
> Ignite writes/evicts half-filled pages to disk and reads them back later
> when it has to append a key/value to the page?
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Vladislav Pyatkov


Re: [WARN] Failed to read magic header log

2020-09-01 Thread Vladislav Pyatkov
Hi,

Can you provide a log from the other node, the one started on 42.1.188.128?
My thought is that the node is configured differently; perhaps SSL is
configured on it.

On Tue, Sep 1, 2020 at 10:27 AM kay  wrote:

> Hello, I'm still waiting for a reply.
>
> The rmtAddr port always changes.
>
> Is this normal, or can I ignore the log message?
> These log entries appear every 3~5 seconds.
>
> Thank u so much.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Vladislav Pyatkov


Re: question about collation with an ignite set

2020-08-02 Thread Vladislav Pyatkov
Scott,

I think it depends on the partition loss policy configured on the cluster [1],
because the contents of a distributed collection are stored in a system Ignite
cache, which obeys the same rules as all other cluster caches.

[1]: https://apacheignite.readme.io/docs/partition-loss-policies
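To make the loss-policy dependence concrete, here is how it is set on a regular cache configuration. The set's backing cache is a system cache, so this is an illustrative sketch of the policy itself rather than the exact configuration path for a distributed set; the cache name is a placeholder:

```java
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class LossPolicySketch {
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setBackups(0);

        // With IGNORE, partitions lost when a node leaves are simply gone
        // (matching "elements associated with said node disappear");
        // READ_WRITE_SAFE would instead fail operations on lost partitions
        // until the state is reset.
        ccfg.setPartitionLossPolicy(PartitionLossPolicy.IGNORE);

        return ccfg;
    }
}
```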

On Mon, Aug 3, 2020 at 8:35 AM scottmf  wrote:

> hi, If I setup my *Ignite Set* as specified below what will happen when a
> node leaves the cluster topology forever? Will I simply lose the elements
> which are stored locally - via collation = true - or will I run into
> problems?
>
> Overall I want the all cluster nodes to be aware of all elements in the
> distributed set. If a node leaves the topology i want the elements
> associated with said node to simply disappear from the set.
>
> Will this configuration achieve that functionality?
>
> CollectionConfiguration cacheConfiguration = new 
> CollectionConfiguration();
> cacheConfiguration.setCollocated(true);
> cacheConfiguration.setBackups(0);
> cacheConfiguration.setGroupName("grp");
> this.countQueriesSet = ignite.set("myset", cacheConfiguration);
>
>
> --
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>


-- 
Vladislav Pyatkov


Re: How do I know the cache rebalance is finished?

2020-07-10 Thread Vladislav Pyatkov
Hi,
I don't think this is a priority issue, because the current behavior is
essentially correct.
The EVT_CACHE_REBALANCE_STOPPED event is received when all data has been
loaded to a node, but the affinity switch happens only after all caches have
been rebalanced.
First of all, why do you need to know when the affinity changes after
rebalancing? From my point of view, rebalancing is a process that should not
influence user load.
Alternatively, you can wait for all caches that are rebalancing and be sure
all the data was transferred.

In the log you can see messages like:

Rebalancing scheduled [order=[ignite-sys-cache, ON_HEAP_CACHE], 
top=AffinityTopologyVersion [topVer=2, minorTopVer=0], rebalanceId=1, 
evt=NODE_JOINED, node=8138d15d-1606-4eb1-8359-d5637d52]

This means ignite-sys-cache will be rebalanced first, and ON_HEAP_CACHE after it.

After each rebalance future completes:

Completed rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=ignite-sys-cache] ...

Here rebalancing has finished for ignite-sys-cache.

Completed rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=ON_HEAP_CACHE] ...

And here rebalancing has finished for ON_HEAP_CACHE.

You will then see a topology switch on the minor version:

Started exchange init [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1]
...
Completed partition exchange [localNode=8138d15d-1606-4eb1-8359-d5637d52, 
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
[topVer=2, minorTopVer=1]...

And only after this exchange completes will you see new primary partitions on
the joined node.

This is what happens now.
I really don't know how to change this behavior to make it more convenient
for users.
If you still have a use case where you need to know the exact moment the
affinity switches, could you move this discussion to the developer list?
I hope the developers can help us.
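As a sketch of the "wait for the per-cache rebalance events" option: the listener below fires once per cache group when its data transfer finishes (still before the later affinity switch described above). It assumes EVT_CACHE_REBALANCE_STOPPED has been enabled via IgniteConfiguration#setIncludeEventTypes:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.EventType;

public class RebalanceListenerSketch {
    public static void listen(Ignite ignite) {
        // NOTE: EVT_CACHE_REBALANCE_STOPPED must also be listed in
        // IgniteConfiguration#setIncludeEventTypes, otherwise it is not fired.
        ignite.events().localListen(evt -> {
            CacheRebalancingEvent rebalanceEvt = (CacheRebalancingEvent)evt;

            System.out.println("Rebalance stopped for cache: " + rebalanceEvt.cacheName());

            return true; // keep the listener registered
        }, EventType.EVT_CACHE_REBALANCE_STOPPED);
    }
}
```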

On 2020/07/08 21:21:37, Humphrey  wrote: 
> Bumping this topic: the ticket is still open (almost 4 years). 
> Any progress / priority on this ticket, or a workaround?
> https://issues.apache.org/jira/browse/IGNITE-3362.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 


Re: Affinity calls in stream receiver

2018-07-30 Thread Vladislav Pyatkov
Hi David,

I think if you have only two distinct classes, Metaspace should not contain
6000 classes.
If I am not mistaken, the server will hold those two classes once per owning
client node (after one of the client nodes leaves the topology, its classes
will be unloaded from Metaspace).

Otherwise, please provide a reproducer where the Metaspace overflow happens.
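The SHARED vs CONTINUOUS choice discussed in this thread is set on the node configuration; a minimal sketch:

```java
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class P2PDeploymentSketch {
    public static IgniteConfiguration config() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setPeerClassLoadingEnabled(true);

        // SHARED (the default) and CONTINUOUS differ in when peer-loaded
        // classes are undeployed after the originating node leaves the
        // topology; CONTINUOUS keeps them cached across node restarts.
        cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);

        return cfg;
    }
}
```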


On Wed, Jul 18, 2018 at 1:45 AM, Dave Harvey  wrote:

> We switched to CONTINUOUS mode based on the assumption that SHARED mode had
> regressed in a way that allowed it to create many class loaders, and
> eventually run out of Metaspace.
>
> CONTINUOUS mode failed much sooner, and we were able to reproduce that
> failure and identify bugs in the code.   The code that tries to handle
> cycles in a graph search fails the search on a cycle rather than just
> breaking the recursion.
> Added https://issues.apache.org/jira/browse/IGNITE-9026
>
> Note: we did conclude that this is unrelated to nested or anonymous
> classes,
> as we originally assumed.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Vladislav Pyatkov


Re: Data lose in query

2017-12-11 Thread Vladislav Pyatkov
Hi,

When you use a JOIN, you should either enable the distributedJoins flag [1]
or take care that each joined entry is collocated [2].

[1]: org.apache.ignite.cache.query.SqlFieldsQuery#setDistributedJoins
[2]: https://apacheignite.readme.io/docs
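A hedged sketch of the first option, enabling distributed joins on a single query. The cache name, table names, and SQL text are placeholders, not taken from the attached query:

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DistributedJoinSketch {
    public static List<List<?>> runJoin(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select o.id, c.name from Orders o join Customers c on o.custId = c.id");

        // Without this flag (or proper affinity collocation of the joined
        // rows), a join over a partitioned cache silently returns only the
        // rows that happen to be collocated -- matching the 25k vs 3500
        // discrepancy described above.
        qry.setDistributedJoins(true);

        return cache.query(qry).getAll();
    }
}
```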

On Mon, Dec 11, 2017 at 11:36 AM, Ahmad Al-Masry <ma...@harri.com> wrote:

> Dears;
> The when I execute the attached query on Mysql data source or on a single
> node ignite, it returns about 25k records.
> When multiple node, it gives me about 3500 records.
> The caches are atomic and partitioned.
> Any suggestions.
> BR
>
> --
>
>
>
> This email, and the content it contains, are intended only for the persons
> or entities to which it is addressed. It may contain sensitive,
> confidential and/or privileged material. Any review, retransmission,
> dissemination or other use of, or taking of any action in reliance upon,
> this information by persons or entities other than the intended
> recipient(s) is prohibited. If you received this email in error, please
> immediately contact security[at]harri[dot]com and delete it from any device
> or system on which it may be stored.
>



-- 
Vladislav Pyatkov


Re: Node not starting and waiting for ever

2017-02-17 Thread Vladislav Pyatkov
Hi,

The metrics appear with a specific frequency [1]; they do not mean the node
is hung. This is normal behavior.

About the row:

^-- Non heap [used=137MB, free=-1%, comm=139MB]

If the JVM non-heap memory size is unlimited, the free size is shown as "-1%".

Why do you think these nodes did not start?

[1]:
https://ignite.apache.org/releases/mobile/org/apache/ignite/configuration/IgniteConfiguration.html#setMetricsLogFrequency(long)
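A one-line sketch of tuning that frequency (the 5-minute value is illustrative):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class MetricsLogSketch {
    public static IgniteConfiguration config() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Print the metrics block every 5 minutes instead of the default
        // 60 seconds; 0 disables the periodic metrics log entirely.
        cfg.setMetricsLogFrequency(5 * 60 * 1000L);

        return cfg;
    }
}
```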

On Thu, Feb 16, 2017 at 10:17 PM, Ranjit Sahu <ranjit.s...@gmail.com> wrote:

> Hi,
>
> We are trying to start the Ignite node on spark worker node. When i try to
> start 10 nodes, few starts and few not and its in hung state.
>
> The log shows non heap free -1% and the log is below. Any clue whats
> happening and how to fix this ?
>
> 17/02/16 13:04:54 INFO IgniteKernal%WCA:
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=a5f2ef72, name=WCA, uptime=00:12:00:068]
> ^-- H/N/C [hosts=8, nodes=8, CPUs=336]
> ^-- CPU [cur=0.03%, avg=0.1%, GC=0%]
> ^-- Heap [used=687MB, free=94.96%, comm=2609MB]
> ^-- Non heap [used=137MB, free=-1%, comm=139MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
> ^-- Outbound messages queue [size=0]
> 17/02/16 13:05:54 INFO IgniteKernal%WCA:
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=a5f2ef72, name=WCA, uptime=00:13:00:077]
> ^-- H/N/C [hosts=8, nodes=8, CPUs=336]
> ^-- CPU [cur=0.03%, avg=0.1%, GC=0%]
> ^-- Heap [used=699MB, free=94.87%, comm=2609MB]
> ^-- Non heap [used=137MB, free=-1%, comm=139MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
> ^-- Outbound messages queue [size=0]
> 17/02/16 13:06:54 INFO IgniteKernal%WCA:
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=a5f2ef72, name=WCA, uptime=00:14:00:085]
> ^-- H/N/C [hosts=8, nodes=8, CPUs=336]
> ^-- CPU [cur=0.03%, avg=0.09%, GC=0%]
> ^-- Heap [used=713MB, free=94.78%, comm=2609MB]
> ^-- Non heap [used=137MB, free=-1%, comm=139MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
> ^-- Outbound messages queue [size=0]
>
>
> Thanks,
>
> Ranjit
>
>


-- 
Vladislav Pyatkov
Architect-Consultant "GridGain Rus" Llc.
+7 963 716 68 99


Re: 答复: persist only on delete

2017-02-06 Thread Vladislav Pyatkov
Hi,

I think it will work, because on each "get" from a cache, Ignite tries to get
the value from the persistence store if it does not find it in memory.
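An untested sketch of the "persist only on delete" store discussed in this thread. The persistToDb method and cache name are placeholders, and whether reading the value back inside delete() is safe at that point is exactly the open question of the thread:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

public class PersistOnDeleteStore extends CacheStoreAdapter<String, String> {
    @IgniteInstanceResource
    private Ignite ignite; // injected by Ignite into the store instance

    @Override public String load(String key) {
        return null; // data lives only in memory until it is deleted
    }

    @Override public void write(Cache.Entry<? extends String, ? extends String> e) {
        // Intentionally a no-op: frequent updates are not persisted.
    }

    @Override public void delete(Object key) {
        // Read the value back from the cache before the removal completes,
        // then persist it once -- the pattern the thread is asking about.
        String val = ignite.<String, String>cache("myCache").get((String)key);

        persistToDb((String)key, val);
    }

    private void persistToDb(String key, String val) {
        // placeholder for the actual persistence logic
    }
}
```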

On Mon, Feb 6, 2017 at 10:39 AM, Shawn Du <shawn...@neulion.com.cn> wrote:

> Hi,
>
>
>
> In delete method, we can only get cache key.  In order to get cache entry
> value, we need get Ignite instance and get the cache.
>
>
>
> Assume we already get all of them. Can we get value in the cache by the
> key? Which is still valid in cache?
>
>
>
> Thanks
>
> Shawn
>
>
>
> *发件人:* Shawn Du [mailto:shawn...@neulion.com.cn]
> *发送时间:* 2017年2月6日 15:12
> *收件人:* user@ignite.apache.org
> *主题:* persist only on delete
>
>
>
> Hi,
>
>
>
> I have a case which cache entry will be update frequently, and I only want
> to persist it when the cache is manually remove(cache will not change
> anymore) by me.
>
>
>
> For this case, It is a good idea to implement persist logic in
> delete/deleteAll method while write/writeAll do nothing?
>
>
>
> @Override
> public void delete(Object o)
> {
> //do nothing, we never have this operation.
> }
>
>
>
>
>
> Thanks
>
> Shawn
>



-- 
Vladislav Pyatkov


Re: Ignite java and xml configuration

2017-01-27 Thread Vladislav Pyatkov
Hi Anil,

You can configure the cache in XML and use annotations for the query fields.
But if you use @QuerySqlField, then you should use the
CacheConfiguration#setIndexedTypes property instead of QueryEntity.
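A minimal sketch of the annotation-based variant (the field names are illustrative, not from the original Person class):

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Person {
    // Annotated fields become queryable; index = true also creates an index.
    @QuerySqlField(index = true)
    private String name;

    @QuerySqlField
    private int age;
}

// Registering the annotated type instead of a QueryEntity:
//   CacheConfiguration<AffinityKey, Person> cfg = new CacheConfiguration<>("PERSON_CACHE");
//   cfg.setIndexedTypes(AffinityKey.class, Person.class);
```

The annotations and setIndexedTypes are alternatives to the XML QueryEntity; mixing both for the same type is what typically fails.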

On Fri, Jan 27, 2017 at 12:06 PM, Anil  wrote:

> Hi,
>
> we are using combination of java and xml configuration for ignite.
>
> for now, cache configuration uses java configuration and  swap, discovery
> uses xml configuration.
>
> Is there any way to move the cache configuration to xml which uses the
> java annotations ? thanks.
>
> Below is the current cache configuration.
>
> CacheConfiguration pConfig = new
> CacheConfiguration();
> pConfig.setName("PERSON_CACHE");
> pConfig.setBackups(1);
> pConfig.setCacheMode(CacheMode.PARTITIONED);
> pConfig.setIndexedTypes(AffinityKey.class, Person.class);
> pConfig.setCopyOnRead(false);
> pConfig.setSwapEnabled(true);
> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> pConfig.setOffHeapMaxMemory(1024 * 1024 * 1024 * 8);
>
>
> I set the query entities in XML and tried to use @QuerySqlField for the
> Person POJO fields, but it did not work.
>
> Thanks.
>
>


Re: Asynchronous jobs (not the scheduling)

2017-01-25 Thread Vladislav Pyatkov
Hi Sergei,

Why do you not use "ignite.compute().withAsync()"?
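A sketch of the Ignite 1.x withAsync() pattern being suggested (the job body is a placeholder):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCompute;
import org.apache.ignite.lang.IgniteFuture;

public class AsyncComputeSketch {
    public static void run(Ignite ignite) {
        // Ignite 1.x style: withAsync() returns a compute facade whose calls
        // return immediately; the result is picked up via future().
        IgniteCompute asyncCompute = ignite.compute().withAsync();

        asyncCompute.run(() -> System.out.println("job body, runs on some node"));

        IgniteFuture<Void> fut = asyncCompute.future();

        // Non-blocking completion callback instead of fut.get().
        fut.listen(f -> System.out.println("job finished"));
    }
}
```

Note this makes the *submission* asynchronous; the job executor thread on the remote node is still occupied while the job body runs, which is the part of Sergei's question it does not solve.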

On Wed, Jan 25, 2017 at 2:04 PM, Sergei Egorov <bsid...@gmail.com> wrote:

> Hi all,
>
> I'm wondering if it's possible not to block the job executor's thread when
> I wait for some async event.
>
> somethingLike:
>
> ```
> ignite.compute().run(() -> someServiceRefSomehow.doSomethingAsync());
> ```
>
> where
> ```
> interface SomeService {
> CompletableFuture doSomethingAsync();
> }
> ```
>
> Or even better:
>
> ```
> interface SomeService {
> Observable doSomethingAsync();
> }
> ```
>
> Thanks!
>



-- 
Vladislav Pyatkov


Re: 答复: how to increase CPU utilization to increase compute performance

2017-01-18 Thread Vladislav Pyatkov
Hi,

If you want to find the reason, you need to profile the application and draw
conclusions from the results.

You should check your application (using Java Flight Recorder [1], for
example) and investigate where the threads are stopping. How long do threads
spend in the parked state? What is the reason for that?

[1]:
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats

On Wed, Jan 18, 2017 at 1:42 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

> This is the configuration. All just default.
>
>
>
> [XML configuration stripped by the mail archive; the only surviving line
> is the discovery address list:]
>
> localhost:47500..47509
>
>
>
> When running the job, both IO and CPU load are very low.
>
> Now I am testing splitting into fewer jobs, and also tuning
> publicThreadPoolSize. It seems that splitting into fewer jobs has a
> positive effect.
>
> Setting publicThreadPoolSize high has a negative effect. Please confirm
> and give suggestions.
>
>
>
> Thanks
>
> Shawn
>
>
>
> *发件人:* Artem Schitow [mailto:artem.schi...@gmail.com]
> *发送时间:* 2017年1月18日 18:08
> *收件人:* user@ignite.apache.org
> *主题:* Re: how to increase CPU utilization to increase compute performance
>
>
>
> Hi, Shawn!
>
>
>
> Can you please attach you Ignite configuration? What are your disk I/O
> load and CPU load when you’re running the job?
>
>
> —
>
> Artem Schitow
>
> artem.schi...@gmail.com
>
>
>
>
>
>
>
> On 18 Jan 2017, at 13:02, Shawn Du <shawn...@neulion.com.cn> wrote:
>
>
>
> Hi,
>
>
>
> I have a task to compute on ignite. My Service has 8 cores. I split the
> task into more than 1K jobs and merge the result.
>
> From client see, the task run more than 3 seconds, and sometimes more than
> 10 seconds. The ignite server load is very slow.
>
> I wonder to know how to increase the CPU utilization to increase the
> performance?
>
>
>
> Thanks
>
> Shawn
>
>
>



-- 
Vladislav Pyatkov


Re: Exception while resolving topology version.

2017-01-10 Thread Vladislav Pyatkov
Hi Tolga,

Why is the minor topology version growing so fast?
Are you creating caches endlessly while the application runs?

On Fri, Jan 6, 2017 at 11:23 AM, dkarachentsev <dkarachent...@gridgain.com>
wrote:

> Hi Tolga,
>
> Please attach full logs from client and server, and your Ignite
> configurations. Do you constantly create/destroy caches during runtime?
>
> -Dmitry
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Exception-while-resolving-topology-
> version-tp9926p9927.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Unable to start 2 server nodes on same machine

2017-01-10 Thread Vladislav Pyatkov
Hi,

This could be caused by an incorrect multicast configuration on the system,
which leads to a hang during the discovery phase.
Could you try it on the latest version (1.8) without multicast?

On Tue, Jan 10, 2017 at 10:13 AM, chaitanya kulkarni <9...@gmail.com
> wrote:

> Btw this was v 1.4.0
>
> On Jan 10, 2017 07:12, "chaitanya kulkarni" <9...@gmail.com>
> wrote:
>
>> Hi,
>>
>> For my topology I need all nodes to be server nodes, which share data
>> with each other on demand. I've configured them to use the same multicast
>> address so they can discover each other / share data amongst themselves.
>>
>> 1. When I start a server with a grid name- it comes up fine - but it
>> shows ignition state as STOPPED in JMX- with no exceptions(have debug log
>> ON).
>> Why?
>>
>> 2. When I try to start second server instance with same grid name.
>> Ignition.start(gridName) call hangs ! Am I doing something wrong?
>>
>>  Please help...
>>
>>


-- 
Vladislav Pyatkov


Re: Update value in stream transformer

2017-01-10 Thread Vladislav Pyatkov
Hi,

In general, you do not know which data has already been stored in the cache
and which is still in the streamer (it depends on the flush configuration and
the streamer capacity).
You can access the cache directly using cache.get(key), but in that case part
of the data may still be in the streamer.
Therefore, if you want the latest actual value, you need to call
streamer.flush() or save the values by key somewhere yourself.
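A sketch of the flush-then-read pattern (the key/value types and parameter names are placeholders):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerFlushSketch {
    public static String latestValue(IgniteDataStreamer<Integer, String> streamer,
                                     IgniteCache<Integer, String> cache, int key) {
        // Force buffered entries out of the streamer so the cache read below
        // observes everything added so far; flush() blocks until the
        // buffered data has actually landed in the cache.
        streamer.flush();

        return cache.get(key);
    }
}
```

Flushing on every read defeats the streamer's batching, so this is for occasional consistency points, not per-entry reads.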

On Tue, Jan 10, 2017 at 7:54 AM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> But using an entry processor we have to set the value manually using the
> e.setValue() method; instead I want to store the value from the incoming
> stream as-is (since I don't know what data is in the stream, I can't set it
> manually). So how can we access the current stream value in the transformer
> instead of the previous one?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Update-value-in-stream-transformer-tp9974p9985.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Error while writing to oracle DB

2016-12-27 Thread Vladislav Pyatkov
> GridDhtAtomicCache.java:1652)
> ... 16 more
> Suppressed: class org.apache.ignite.IgniteCheckedException:
> Failed to
> write entry to database [table=C##TPCCTEST.ORDERS5, entry=Entry
> [key=Orders5Key [oId=3015, oDId=7, oWId=4], val=Orders5 [oId=3015, oDId=7,
> oWId=4, oCId=2020, oEntryD=2016-12-27 15:04:34.172, oCarrierId=null,
> oOlCnt=12, oAllLocal=1]]]
> at
> org.apache.ignite.internal.processors.cache.store.
> GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:583)
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(
> GridCacheMapEntry.java:2425)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2252)
> ... 17 more
> Caused by: javax.cache.integration.CacheWriterException:
> Failed to write
> entry to database [table=C##TPCCTEST.ORDERS5, entry=Entry [key=Orders5Key
> [oId=3015, oDId=7, oWId=4], val=Orders5 [oId=3015, oDId=7, oWId=4,
> oCId=2020, oEntryD=2016-12-27 15:04:34.172, oCarrierId=null, oOlCnt=12,
> oAllLocal=1]]]
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.
> write(CacheAbstractJdbcStore.java:1020)
> at
> org.apache.ignite.internal.processors.cache.store.
> GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:575)
> ... 19 more
> Caused by: java.sql.SQLException: Listener refused the
> connection with the
> following error:
> ORA-12519, TNS:no appropriate service handler found
>
> at oracle.jdbc.driver.T4CConnection.logon(
> T4CConnection.java:673)
> at
> oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:711)
> at oracle.jdbc.driver.T4CConnection.(
> T4CConnection.java:385)
> at
> oracle.jdbc.driver.T4CDriverExtension.getConnection(
> T4CDriverExtension.java:30)
> at oracle.jdbc.driver.OracleDriver.connect(
> OracleDriver.java:558)
> at
> oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(
> OracleDataSource.java:297)
> at
> oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:224)
> at
> oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:169)
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.openConnection(
> CacheAbstractJdbcStore.java:322)
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.connection(
> CacheAbstractJdbcStore.java:352)
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.
> write(CacheAbstractJdbcStore.java:978)
> ... 20 more
> Caused by: oracle.net.ns.NetException: Listener refused
> the connection
> with the following error:
> ORA-12519, TNS:no appropriate service handler found
>
> at
> oracle.net.ns.NSProtocolStream.negotiateConnection(
> NSProtocolStream.java:272)
> at oracle.net.ns.NSProtocol.
> connect(NSProtocol.java:263)
> at oracle.jdbc.driver.T4CConnection.connect(
> T4CConnection.java:1360)
> at oracle.jdbc.driver.T4CConnection.logon(
> T4CConnection.java:486)
> ... 30 more
>
> What can be cause for this error?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-while-writing-to-oracle-DB-tp9740.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: byte[] as a key in Ignite cache

2016-12-26 Thread Vladislav Pyatkov
Hi,

Your question is not entirely clear to me.

>> I realize that strings do take double the space as compared to bytes in
java.

That is not always correct. What about non-Latin characters?
If you are able to pack the string into bytes, it is of course a good idea
for reducing memory consumption.

As for byte[] as a key: I think you will run into a hash code issue (arrays
use identity-based hashing). You should implement a key class that wraps the
byte array.
On Sun, Dec 25, 2016 at 4:39 PM, Oru <debasish.upadh...@gmail.com> wrote:

> Hi There!
> I have a situation where I need to have a Java String as a key in a
> distributed Ignite Cache across multiple server nodes.
>
> I realize that strings do take double the space as compared to bytes in
> java.
>
> So it is possible or recommended to use a byte[] as a key in Ignite Cache?
>
> What's the expert advise on this?
>
> Thanks in advance
> Debasish
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/byte-as-a-key-in-Ignite-cache-tp9728.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to save odd number objects on one node and even number objects on another node

2016-12-16 Thread Vladislav Pyatkov
I think the simplest AffinityKeyMapper [1] will be sufficient:

public Object affinityKey(Object key) {
    if (key instanceof Integer)
        return ((Integer)key) % 2;

    return key; // non-integer keys map as themselves
}

and a configuration like this (the mapper class name is illustrative; the
original XML tags were stripped by the mail archive):

<property name="affinityMapper">
    <bean class="com.example.OddEvenAffinityKeyMapper"/>
</property>
...

[1]: org.apache.ignite.cache.affinity.AffinityKeyMapper

On Fri, Dec 16, 2016 at 12:34 AM, rishireddy.bokka <
rishireddy.bo...@gmail.com> wrote:

> Hi Ignite Team,
> I recently started using Ignite and seems very useful.
> I am having 2 nodes(n1, n2) and I have like 100,000 objects (say Person
> object)with keys as integers starting from 1 till 100,000 that should be
> saved on both nodes.
> I want the 1,3,5,7,9,... person objects to be saved on node n1 and the
> 2,4,6,8,... person objects to be saved on node n2.
> Is it possible to achieve this using partitioned cache mode? If yes could
> you please let me know how?
>
> Thanks,
> Rishi
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-save-odd-number-objects-on-
> one-node-and-even-number-objects-on-another-node-tp9572.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Building complex queries to query ignite Cache

2016-12-05 Thread Vladislav Pyatkov
Hi,

I see a way to improve the approach:

1) Use "affinity collocation" [1] by TradeType (that allows matching a cache
partition to a TradeType).

2) Add an index on TradeStatus [2] (that accelerates a SQL query by
TradeStatus).

3) Use an affinity call [3] (asynchronously) for each particular TradeType
and execute SQL like this:
select TradeType, count(*) where Trade.TradeStatus = %1
with the flag o.a.i.cache.query.SqlQuery#setLocal().

4) After executing these, aggregate all the results into a table like yours.

[1]: https://apacheignite.readme.io/docs/affinity-collocation
[2]: https://apacheignite.readme.io/docs/indexes
[3]:
https://apacheignite.readme.io/docs/collocate-compute-and-data#affinity-call-and-run-methods
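The per-type local queries can also be collapsed into one server-side GROUP BY; a hedged sketch, assuming tradeType and status are indexed/queryable fields of Trade:

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class TradeCountSketch {
    public static List<List<?>> countByTypeAndStatus(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select tradeType, status, count(*) from Trade group by tradeType, status");

        // The aggregation runs on the server nodes; only the small result
        // table crosses the network, instead of millions of raw entries
        // pulled into one JVM as in the ScanQuery + streams approach.
        return cache.query(qry).getAll();
    }
}
```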

On Mon, Dec 5, 2016 at 4:56 PM, Andrey Mashenkov <andrey.mashen...@gmail.com
> wrote:

> Hi,
> You can try to add index with setting: 
> cacheConfig.setIndexedTypes(Integer.class,
> Trade.class);
> and annotate "status"  field with  @QuerySqlField(index = true)
>
> Then you will be able to make a SQL query with grouping, something like:
> "select count(*) from Trade group by status";
>
> If you need to group by multiple fields:
> Create group index with annotating class Trade with @QueryGroupIndex(name
> = "%group_name%")
> Add field to group index with annotate  field with   @QuerySqlField(index
> = true) and @QuerySqlField.Group(name = "%group_name%", order =
> %field_order_in_group%)
> You are free to choose %group_name%.
>
>
> On Mon, Dec 5, 2016 at 4:20 PM, begineer <redni...@gmail.com> wrote:
>
>> Hi, I have below sample bean which I am storing as value in cache. I want
>> to
>> build a map such that it gives me count of trade status for each trade
>> type(Pls see sample output, done thru java 8 streams).
>> Problem with this approach is I have to pull millions of entries from
>> cache
>> to some collection and manipulate them.
>>
>> Is there a way to query cache using SQL/ScanQueries to build same map in
>> more efficient way. Below is my sample code to explain the problem.
>>
>> public class TradeCacheExample {
>> public static void main(String[] args) {
>> Trade trade1 = new Trade(1, TradeStatus.NEW, "type1");
>> Trade trade2 = new Trade(2, TradeStatus.FAILED, "type2");
>> Trade trade3 = new Trade(3, TradeStatus.NEW, "type1");
>> Trade trade4 = new Trade(4, TradeStatus.NEW, "type3");
>> Trade trade5 = new Trade(5, TradeStatus.CHANGED, "type2");
>> Trade trade6 = new Trade(6, TradeStatus.EXPIRED, "type1");
>>
>> Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
>> CacheConfiguration<Integer, Trade> config = new
>> CacheConfiguration<>("mycache");
>> config.setIndexedTypes(Integer.class, Trade.class);
>> IgniteCache<Integer, Trade> cache =
>> ignite.getOrCreateCache(config);
>> cache.put(trade1.getId(), trade1);
>> cache.put(trade2.getId(), trade2);
>> cache.put(trade3.getId(), trade3);
>> cache.put(trade4.getId(), trade4);
>> cache.put(trade5.getId(), trade5);
>> cache.put(trade6.getId(), trade6);
>> List<Trade> trades = cache.query(new ScanQuery<Integer,
>> Trade>()).getAll().stream().map(item -> item.getValue()).collect(toList());
>>
>> Map<String, Map<TradeStatus, Long>> resultMap =
>> trades.stream().collect(
>> groupingBy(item -> item.getTradeType(),
>> groupingBy(Trade::getStatus,
>> counting())));
>> System.out.println(resultMap);
>> // {type3={NEW=1}, type2={CHANGED=1, FAILED=1},
>> // type1={EXPIRED=1, NEW=2}}
>> }
>> }
>>
>> public class Trade {
>> private int id;
>> private TradeStatus status;
>> private String tradeType;
>> public Trade(int id, TradeStatus status, String tradeType) {
>> this.id = id;
>> this.status = status;
>> this.tradeType = tradeType;
>>     }
>>
>> //setter getter, equals, hashcode methods
>>
>> public enum TradeStatus {
>> NEW, CHANGED, EXPIRED, FAILED, UNCHANGED
>> }
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Building-complex-queries-to-query-ignite-
>> Cache-tp9392.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> С уважением,
> Машенков Андрей Владимирович
> Тел. +7-921-932-61-82
>
> Best regards,
> Andrey V. Mashenkov
> Cerr: +7-921-932-61-82
>



-- 
Vladislav Pyatkov


Re: Building complex queries to query ignite Cache

2016-12-05 Thread Vladislav Pyatkov
Hi,

You should explain your task in more detail.
Why do you need to get all entries as a map on one node? You can improve
performance by processing only local entries (or the entries of a particular
partition per thread) on each node.
And what will you do with such a huge map? You can fetch only the specific
part of the entries you need (using SQL on indexed fields).

I think you should review the algorithm so that it supports the distributed
paradigm.

On Mon, Dec 5, 2016 at 4:20 PM, begineer <redni...@gmail.com> wrote:

> Hi, I have below sample bean which I am storing as value in cache. I want
> to
> build a map such that it gives me count of trade status for each trade
> type(Pls see sample output, done thru java 8 streams).
> Problem with this approach is I have to pull millions of entries from cache
> to some collection and manipulate them.
>
> Is there a way to query cache using SQL/ScanQueries to build same map in
> more efficient way. Below is my sample code to explain the problem.
>
> public class TradeCacheExample {
> public static void main(String[] args) {
> Trade trade1 = new Trade(1, TradeStatus.NEW, "type1");
> Trade trade2 = new Trade(2, TradeStatus.FAILED, "type2");
> Trade trade3 = new Trade(3, TradeStatus.NEW, "type1");
> Trade trade4 = new Trade(4, TradeStatus.NEW, "type3");
> Trade trade5 = new Trade(5, TradeStatus.CHANGED, "type2");
> Trade trade6 = new Trade(6, TradeStatus.EXPIRED, "type1");
>
> Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
> CacheConfiguration<Integer, Trade> config = new
> CacheConfiguration<>("mycache");
> config.setIndexedTypes(Integer.class, Trade.class);
> IgniteCache<Integer, Trade> cache =
> ignite.getOrCreateCache(config);
> cache.put(trade1.getId(), trade1);
> cache.put(trade2.getId(), trade2);
> cache.put(trade3.getId(), trade3);
> cache.put(trade4.getId(), trade4);
> cache.put(trade5.getId(), trade5);
> cache.put(trade6.getId(), trade6);
> List<Trade> trades = cache.query(new ScanQuery<Integer,
> Trade>()).getAll().stream().map(item -> item.getValue()).collect(toList());
>
> Map<String, Map<TradeStatus, Long>> resultMap =
> trades.stream().collect(
> groupingBy(item -> item.getTradeType(),
> groupingBy(Trade::getStatus,
> counting())));
> System.out.println(resultMap);
> // {type3={NEW=1}, type2={CHANGED=1, FAILED=1},
> // type1={EXPIRED=1, NEW=2}}
> }
> }
>
> public class Trade {
> private int id;
> private TradeStatus status;
> private String tradeType;
> public Trade(int id, TradeStatus status, String tradeType) {
> this.id = id;
> this.status = status;
> this.tradeType = tradeType;
> }
>
> //setter getter, equals, hashcode methods
>
> public enum TradeStatus {
>     NEW, CHANGED, EXPIRED, FAILED, UNCHANGED
> }
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Building-complex-queries-to-query-
> ignite-Cache-tp9392.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Package jar file path for sql Query Entity

2016-12-01 Thread Vladislav Pyatkov
Hi,

Ignite stores data in the format specified by the marshaller. The binary
marshaller is configured by default.
If the binary marshaller is used, all internal work is done on the binary
representation (hence there is no need to deploy the Java classes on the
servers).
If you want to use the binary representation in your own code, use the
cache's "withKeepBinary" method[1].
If your classes do not implement Binarylizable, the marshaller reads the
values through reflection and builds the binary representation.
Is that a clear explanation?

[1]:
https://apacheignite.readme.io/docs/binary-marshaller#binaryobject-cache-api
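A minimal sketch of what this looks like in code; the cache name, key, and
field name below are illustrative assumptions, not taken from this thread:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Sketch only: reads one field of a cached value without deserializing it
// into its original Java class, so that class is not needed on this node.
public class KeepBinaryExample {
    static Object readField(Ignite ignite) {
        IgniteCache<Integer, BinaryObject> cache =
            ignite.cache("myCache").withKeepBinary();

        BinaryObject bo = cache.get(1);

        // field() extracts a single field from the binary representation.
        return bo == null ? null : bo.field("name");
    }
}
```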

On Thu, Dec 1, 2016 at 5:02 PM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> Thanks. After importing same package on server and client side solved my
> problem(initially i wrote class in client node instead of importing it).
>
> But, I am still unclear about how ignite stores data
> case I  : In both Binary and serialized format, and de-serializes data when
> required
> case II : Only in binary format, and then there is no need of
> de-serialization
>
> Can you please explain what is default behavior of ignite for storing
> data(is it case I or case II). Also I have configured cache from xml file
> that I have attached and I have used binarylizable interface so does this
> mean my objects are stored in binary format only?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Getting-null-query-output-after-using-
> QueryEntity-tp9217p9331.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Package jar file path for sql Query Entity

2016-12-01 Thread Vladislav Pyatkov
Hi,

I think that in your case (since you configured "QueryEntity") there is no
need to copy the classes (schema.jar) to the server. Ignite will use the
binary[1] format and execute SQL (or any other query) without deserialization.

But I definitely do not understand why it does not work...
You can check whether all the data structures were created as you expect.
Please verify it using the H2 debug console[2].

[1]: https://apacheignite.readme.io/docs/binary-marshaller
[2]:
https://apacheignite.readme.io/v1.7/docs/performance-and-debugging#using-h2-debug-console

On Tue, Nov 29, 2016 at 7:17 AM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> I have added warehouse.java containing class warehouse to package schema.
> Thats why I wrote schema.warehouse. Query I am executing is SELECT * from
> warehouse w WHERE w.w_id = 1 and entry corresponding to w_id = 1 is present
> in warehouse cache. schema.jar
> <http://apache-ignite-users.70518.x6.nabble.com/file/n9252/schema.jar>
> this is my jar file.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Getting-null-query-output-after-using-
> QueryEntity-tp9217p9252.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Cache transformations

2016-11-30 Thread Vladislav Pyatkov
Hi,

The DataStreamer is the best way to load data into a cache (even
localLoadCache is implemented on top of a streamer).
I just want to note that you should not use a DataStreamer from an Ignite
compute thread (do it asynchronously). Also, you can use one streamer
instance per node if you want better performance.
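For completeness, a hedged sketch of the streamer-based loading being
described; the cache name and data are placeholders:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// Sketch: bulk-load entries through a data streamer instead of per-entry puts.
public class StreamerLoadExample {
    static void load(Ignite ignite) {
        try (IgniteDataStreamer<Integer, String> streamer =
                 ignite.dataStreamer("myCache")) {
            streamer.allowOverwrite(false);        // pure initial load, fastest mode

            for (int i = 0; i < 1_000_000; i++)
                streamer.addData(i, "value-" + i); // batched and routed to data nodes
        }                                          // close() flushes remaining batches
    }
}
```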

On Wed, Nov 30, 2016 at 11:54 AM, nskovpin <kolehan...@gmail.com> wrote:

> Thanks, your answer solved my speed problem. But what about saving data
> into
> a cache? Is there a good approach to save my entities directly in local
> nodes? I know that ignite has a IgniteCache.localLoadCache() method, but it
> requires a CacheStore (I dont need it).
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cache-transformations-tp9219p9290.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Performance question

2016-11-29 Thread Vladislav Pyatkov
Hi Alisher,

That looks doubtful to me. You parallelized the job, but got a performance
decrease.
I recommend using a Java profiler to identify the long-running methods.

How do you get the list of local partitions (could it contain extra numbers)?
And please check that the ForkJoin pool has a sufficient size:

-Djava.util.concurrent.ForkJoinPool.common.parallelism=1024


On Nov 28, 2016 8:39 PM, "Alisher Alimov"  wrote:

> I found only one way to parallelize read via ScanQuery
>
> int[] partitions = 
> this.ignite.affinity("test.cache").primaryPartitions(this.ignite.cluster().node());
>
> startTime = System.currentTimeMillis();
>
> Arrays.stream(partitions)
>   .parallel()
>   .forEach(partition -> {
>   ScanQuery<Object, Object> qry = new ScanQuery<>(partition);
>   qry.setLocal(true);
>   qry.setPageSize(5_000);
>
>   QueryCursor<Cache.Entry<Object, Object>> query = cache.query(qry);
>   List<Cache.Entry<Object, Object>> all = query.getAll();
>   });
>
> System.out.println(String.format("Complete in: %dms", 
> System.currentTimeMillis() - startTime));
>
>
> But it’s doesn’t help a lot (speed was downgrade on 10-20%) or there is
> another good solution to do it?
>
>
>
> With best regards
> Alisher Alimov
> alimovalis...@gmail.com
>
>
>
>
> On 28 Nov 2016, at 19:38, Alexey Goncharuk 
> wrote:
>
> Hi Alisher,
>
> As Nicolae suggested, try parallelizing your scan using per-partition
> iterator. This should give you almost linear performance growth up to the
> number of available CPUs.
> Also make sure to set CacheConfiguration#copyOnRead flag to false.
>
> --AG
>
> 2016-11-28 19:31 GMT+03:00 Marasoiu Nicolae :
>
>> ​Regarding CPU load, a single thread of execution exists in the program
>> so (at most) one core is used. So if you have 8 cores, it means that it is
>> 8 to 16 times slower than a program able to use all the cores & CPU
>> redundancy of the machine.
>>
>> In my tests, indeed, a core looks fully utilized. To me, scanning 1M
>> key-values per second is pretty ok, but indeed, if LMAX got 6M transactions
>> per core per second, it can perhaps go up, but something tells me this will
>> not be the limitation of the typical application.
>>
>>
>> Met vriendelijke groeten / Meilleures salutations / Best regards
>>
>> *Nicolae Marasoiu*
>> *Agile Developer*
>>
>> *E*  *nicolae.maras...@cegeka.com *
>>
>> CEGEKA 15-17 Ion Mihalache Blvd. Tower Center Building,
>> 4th,5th,6th,8th,9th fl
>> RO-011171 Bucharest (RO), Romania
>> *T* +40 21 336 20 65
>> *WWW.CEGEKA.COM *  [image: LinkedIn]
>> 
>> --
>> *De la:* Alisher Alimov 
>> *Trimis:* 28 noiembrie 2016 15:27
>> *Către:* user@ignite.apache.org
>> *Subiect:* Performance question
>>
>> Hello!
>>
>> I have write and run a simple performance test to check
>> IgniteCache#localEntries and found that current method is not enough fast.
>>
>> Ignite ignite = Ignition.start();
>>
>>
>> CacheConfiguration<UUID, UUID> cacheConfiguration = new CacheConfiguration<>();
>> cacheConfiguration.setBackups(0);
>>
>> IgniteCache<UUID, UUID> cache = ignite.getOrCreateCache("test.cache");
>>
>> for (int i = 0; i < 1_000_000; i++) {
>> cache.put(UUID.randomUUID(), UUID.randomUUID());
>> }
>>
>> long startTime = System.currentTimeMillis();
>>
>> cache.localEntries(CachePeekMode.PRIMARY).forEach(entry -> {
>> });
>>
>> System.out.println(String.format("Complete in: %dms", 
>> System.currentTimeMillis() - startTime));
>>
>>
>> Reading local entries take about 1s (1000 rows per ms) that’s is low.
>> Test was run on server with provided configuration with default Ignite
>> configs, load average was about 0 and CPU was not busy more than 10%
>> Intel(R) Xeon(R) CPU   E5645  @ 2.40GHz
>>
>>
>> May be I do  or configure something wrong or current speed is normal?
>>
>>
>> With best regards
>> Alisher Alimov
>> alimovalis...@gmail.com
>>
>>
>>
>>
>>
>
>


Re: Package jar file path for sql Query Entity

2016-11-28 Thread Vladislav Pyatkov
Hi,

Is the class name correct?

If you want to save several different classes to the cache, you must add a
matching "QueryEntity" for each of them.
Could you please provide the jar file?

On Mon, Nov 28, 2016 at 2:17 PM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> I have written following query entity for my database
>
> up1.xml <http://apache-ignite-users.70518.x6.nabble.com/file/n9217/up1.xml
> >
>
> For this query entity I have created jar file of package schema, which
> contains warehouse class. I kept this jar file in libs folder in apache
> ignite installation path. But when I run query on "warehouse cache" that I
> have created I am always getting null(blank) result. Even if I remove this
> jar file from libs folder(and keeping same xml file), still the query runs
> but gives null(blank) result. I think this packaged jar file is not getting
> included when ignite is running. So where should this jar file kept, so
> that
> i can be automatically loaded when ignite starts?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Package-jar-file-path-for-sql-Query-Entity-tp9217.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: CacheConfiguration. AffinityFunction or node filter

2016-11-28 Thread Vladislav Pyatkov
Hi Alisher,

Of course, you should use the node filter (CacheConfiguration#setNodeFilter);
it provides an easy way to deploy a cache on particular nodes.
The affinity function takes care of balancing the partitions between nodes.
Implementing such a function may not be so easy.



On Mon, Nov 28, 2016 at 10:04 AM, Alisher Alimov <alimovalis...@gmail.com>
wrote:

> Hello!
>
> I have a cluster and want to store cache (primary and backups) on
> concretes nodes by filter. I can do it by providing AffinityFunction in
> org.apache.ignite.configuration.CacheConfiguration#setAffinity or
> IgnitePredicate in 
> org.apache.ignite.configuration.CacheConfiguration#setNodeFilter.
> Does they work the same way or what is the best practise?
>
> With best regards
> Alisher Alimov
> alimovalis...@gmail.com
>
>
>
>
>


-- 
Vladislav Pyatkov


Re: Swap space

2016-11-21 Thread Vladislav Pyatkov
Hi Kevin,

Check your log: does any OutOfMemoryError appear in it?

On Mon, Nov 21, 2016 at 5:11 PM, Kevin Daly <ke...@meta.com> wrote:

> I think what he means is that no files are being created.. We are doing
> some
> tests with Ignite and we don't see any use of the cache, even when we load
> more keys than fit in physical memory.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Swap-space-tp8156p9112.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: need few clarification for some of our use cases

2016-11-15 Thread Vladislav Pyatkov
Hi Navneet,

Ignite supports various logging frameworks (JUL, Log4j, SLF4J, etc.)[1]. I
have used the Log4j SocketAppender for tasks like this.

Look at the Ignite transaction API[2]; you get a specific exception when a
transaction is not committed.

Yes, you can use a CacheInterceptor[3] to intercept cache operations, or
Continuous Queries[4] to track cache entry updates.

[1]:
http://ignite.apache.org/releases/1.0.0/javadoc/org/apache/ignite/logger/log4j/Log4JLogger.html
[2]: https://apacheignite.readme.io/docs/transactions
[3]:
https://ignite.apache.org/releases/mobile/org/apache/ignite/cache/CacheInterceptor.html
[4]: https://apacheignite.readme.io/docs/continuous-queries
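On the LRT question, one common approach is to start transactions with an
explicit timeout, so a pending transaction surfaces as a specific exception
instead of hanging; this sketch assumes an already-started node and an
illustrative cache:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionTimeoutException;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

// Sketch: bound a transaction with a timeout so a long-running transaction
// fails fast and can be acted upon.
public class TxTimeoutExample {
    static void transfer(Ignite ignite, IgniteCache<Integer, Integer> cache) {
        // 2000 ms timeout, expected size of 2 entries.
        try (Transaction tx = ignite.transactions().txStart(
                 PESSIMISTIC, REPEATABLE_READ, 2000, 2)) {
            cache.put(1, 100);
            cache.put(2, 200);
            tx.commit();
        }
        catch (TransactionTimeoutException e) {
            // The tx is rolled back when try-with-resources closes it uncommitted.
            System.err.println("Long-running transaction detected: " + e.getMessage());
        }
    }
}
```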

On Wed, Nov 16, 2016 at 10:10 AM, Navneet Kumar <
navneetkumar.in...@gmail.com> wrote:

> Hi All,
> I need few clarification for some of our use cases:
> -   Can we redirect the apache ignite logs to remote server ?
> -   How LRTs(pending transactions not committed)  can be identified
> and acted
> upon ?
> -   Does Apache ignite support DB triggers when Cache is updated.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/need-few-clarification-for-some-of-our-
> use-cases-tp9014.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-11-15 Thread Vladislav Pyatkov
Hi,

You can check it in the 1.6.0 release, or check out master and verify it there.

On Tue, Nov 15, 2016 at 5:05 PM, Anil <anilk...@gmail.com> wrote:

> Thank you. You saved my time.
>
> May i know working ignite version ? i see it is issue in h2 db itself.
>
> Thanks.
>
> On 15 November 2016 at 19:30, Vladislav Pyatkov <vldpyat...@gmail.com>
> wrote:
>
>> Hi Anil,
>>
>> You are right. I have checked this on not released version, but in 7.0.0
>> indexes are not used by some strange reason.
>> You can check the case in master or previous version, it worked earlier
>> and will work after (but 7.0.0 have bug).
>>
>> On Tue, Nov 15, 2016 at 2:36 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> i am still seeing no index used. Can you verify the below query please?
>>>
>>> explain select * from (
>>>
>>> ( select * from Person p join table(joinId varchar(10) =
>>> ('anilkd1','anilkd2')) i on p.id = i.joinId)
>>> UNION
>>> (select * from Person p join table(name varchar(10) = ('Anil1',
>>> 'Anil5')) i on p.name = i.name)
>>>
>>> ) order by id
>>>
>>> and explain plan -
>>>
>>> [[SELECT
>>> _0._KEY AS __C0,
>>> _0._VAL AS __C1,
>>> _0.NAME AS __C2,
>>> _0.ID AS __C3,
>>> _0.COMPANYID AS __C4,
>>> _0.JOINID AS __C5
>>> FROM (
>>> (SELECT
>>> P._KEY,
>>> P._VAL,
>>> P.NAME,
>>> P.ID,
>>> P.COMPANYID,
>>> I.JOINID
>>> FROM "person-map".PERSON P
>>> INNER JOIN TABLE(JOINID VARCHAR(10)=('anilkd1', 'anilkd2')) I
>>> ON 1=1
>>> WHERE P.ID = I.JOINID)
>>> UNION
>>> (SELECT
>>> P._KEY,
>>> P._VAL,
>>> P.NAME,
>>> P.ID,
>>> P.COMPANYID,
>>> I.NAME
>>> FROM "person-map".PERSON P
>>> INNER JOIN TABLE(NAME VARCHAR(10)=('Anil1', 'Anil5')) I
>>> ON 1=1
>>> WHERE P.ID = I.NAME)
>>> ) _0
>>> /* (SELECT
>>> P._KEY,
>>> P._VAL,
>>> P.NAME,
>>> P.ID,
>>> P.COMPANYID,
>>> I.JOINID
>>> FROM "person-map".PERSON P
>>> /++ "person-map".PERSON.__SCAN_ ++/
>>> INNER JOIN TABLE(JOINID VARCHAR(10)=('anilkd1', 'anilkd2')) I
>>> /++ function: JOINID = P.ID ++/
>>> ON 1=1
>>> WHERE P.ID = I.JOINID)
>>> UNION
>>> (SELECT
>>> P._KEY,
>>> P._VAL,
>>> P.NAME,
>>> P.ID,
>>> P.COMPANYID,
>>> I.NAME
>>> FROM "person-map".PERSON P
>>> /++ "person-map".PERSON.__SCAN_ ++/
>>> INNER JOIN TABLE(NAME VARCHAR(10)=('Anil1', 'Anil5')) I
>>> /++ function: NAME = P.ID ++/
>>> ON 1=1
>>> WHERE P.ID = I.NAME)
>>>  */
>>> ORDER BY 4], [SELECT
>>> __C0 AS _KEY,
>>> __C1 AS _VAL,
>>> __C2 AS NAME,
>>> __C3 AS ID,
>>> __C4 AS COMPANYID,
>>> __C5 AS JOINID
>>> FROM PUBLIC.__T0
>>> /* "person-map"."merge_scan" */
>>> ORDER BY 4]]
>>>
>>> Cache configuration :
>>>
>>> CacheConfiguration pCache = new
>>> CacheConfiguration<>("person-map");
>>> pCache.setIndexedTypes(String.class, Person.class);
>>> pCache.setBackups(0);
>>> pCache.setCacheMode(CacheMode.PARTITIONED);
>>> pCache.setCopyOnRead(false);
>>> pCache.setSwapEnabled(true);
>>> pCache.setOffHeapMaxMemory(100);
>>> pCache.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
>>>
>>>
>>> public class Person{
>>> @QuerySqlField(index = true)
>>> private String name;
>>> @QuerySqlField(index = true)
>>> private String id;
>>> @QuerySqlField
>>> private String companyId;
>>> private String value;
>>>
>>> // removed the getters and setters
>>> }
>>>
>>> SqlFieldsQuery sqlQuery = new SqlFieldsQuery("explain select * from ((
>>> select * from Person p join table(joinId varchar(10) =
>>> ('anilkd1','anilkd2')) i on p.id = i.joinId) UNION (select * from
>>> Person p join table(name varchar(10) = ('Anil1', 'Anil5')) i on p.name
>>> = i.name)) order by id");
>>>
>>> List<List<?>> all = testMap.query(sqlQuery).getAll();
>>>
>>> Thanks
>>>
>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>
>


-- 
Vladislav Pyatkov


Re: IN Query

2016-11-15 Thread Vladislav Pyatkov
Hi Anil,

You are right. I checked this on an unreleased version, but in 7.0.0 indexes
are not used for some strange reason.
You can check the case in master or in a previous version; it worked earlier
and will work later (but 7.0.0 has the bug).

On Tue, Nov 15, 2016 at 2:36 PM, Anil <anilk...@gmail.com> wrote:

> HI,
>
> i am still seeing no index used. Can you verify the below query please?
>
> explain select * from (
>
> ( select * from Person p join table(joinId varchar(10) =
> ('anilkd1','anilkd2')) i on p.id = i.joinId)
> UNION
> (select * from Person p join table(name varchar(10) = ('Anil1', 'Anil5'))
> i on p.name = i.name)
>
> ) order by id
>
> and explain plan -
>
> [[SELECT
> _0._KEY AS __C0,
> _0._VAL AS __C1,
> _0.NAME AS __C2,
> _0.ID AS __C3,
> _0.COMPANYID AS __C4,
> _0.JOINID AS __C5
> FROM (
> (SELECT
> P._KEY,
> P._VAL,
> P.NAME,
> P.ID,
> P.COMPANYID,
> I.JOINID
> FROM "person-map".PERSON P
> INNER JOIN TABLE(JOINID VARCHAR(10)=('anilkd1', 'anilkd2')) I
> ON 1=1
> WHERE P.ID = I.JOINID)
> UNION
> (SELECT
> P._KEY,
> P._VAL,
> P.NAME,
> P.ID,
> P.COMPANYID,
> I.NAME
> FROM "person-map".PERSON P
> INNER JOIN TABLE(NAME VARCHAR(10)=('Anil1', 'Anil5')) I
> ON 1=1
> WHERE P.ID = I.NAME)
> ) _0
> /* (SELECT
> P._KEY,
> P._VAL,
> P.NAME,
> P.ID,
> P.COMPANYID,
> I.JOINID
> FROM "person-map".PERSON P
> /++ "person-map".PERSON.__SCAN_ ++/
> INNER JOIN TABLE(JOINID VARCHAR(10)=('anilkd1', 'anilkd2')) I
> /++ function: JOINID = P.ID ++/
> ON 1=1
> WHERE P.ID = I.JOINID)
> UNION
> (SELECT
> P._KEY,
> P._VAL,
> P.NAME,
> P.ID,
> P.COMPANYID,
> I.NAME
> FROM "person-map".PERSON P
> /++ "person-map".PERSON.__SCAN_ ++/
> INNER JOIN TABLE(NAME VARCHAR(10)=('Anil1', 'Anil5')) I
> /++ function: NAME = P.ID ++/
> ON 1=1
> WHERE P.ID = I.NAME)
>  */
> ORDER BY 4], [SELECT
> __C0 AS _KEY,
> __C1 AS _VAL,
> __C2 AS NAME,
> __C3 AS ID,
> __C4 AS COMPANYID,
> __C5 AS JOINID
> FROM PUBLIC.__T0
> /* "person-map"."merge_scan" */
> ORDER BY 4]]
>
> Cache configuration :
>
> CacheConfiguration pCache = new
> CacheConfiguration<>("person-map");
> pCache.setIndexedTypes(String.class, Person.class);
> pCache.setBackups(0);
> pCache.setCacheMode(CacheMode.PARTITIONED);
> pCache.setCopyOnRead(false);
> pCache.setSwapEnabled(true);
> pCache.setOffHeapMaxMemory(100);
> pCache.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
>
>
> public class Person{
> @QuerySqlField(index = true)
> private String name;
> @QuerySqlField(index = true)
> private String id;
> @QuerySqlField
> private String companyId;
> private String value;
>
> // removed the getters and setters
> }
>
> SqlFieldsQuery sqlQuery = new SqlFieldsQuery("explain select * from ((
> select * from Person p join table(joinId varchar(10) =
> ('anilkd1','anilkd2')) i on p.id = i.joinId) UNION (select * from Person
> p join table(name varchar(10) = ('Anil1', 'Anil5')) i on p.name = i.name))
> order by id");
>
> List<List<?>> all = testMap.query(sqlQuery).getAll();
>
> Thanks
>



-- 
Vladislav Pyatkov


Re: Sql Query on binary objects

2016-11-15 Thread Vladislav Pyatkov
Hi,

I mean, what differences do you expect?
This SQL will be equivalent for binary objects; it is enough to add:

IgniteCache<BinaryObject, BinaryObject> cache_customer =
ignite.createCache(ccfg_customer).withKeepBinary();

On Tue, Nov 15, 2016 at 2:44 PM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> For example following is cache query I have written on customer
> table(without
> binary objects)
>
> CacheConfiguration<Object,Object> ccfg_customer = new
> CacheConfiguration<>();
> ccfg_customer.setIndexedTypes(customerKey.class, customer.class);
> ccfg_customer.setName("customer_cache");
> IgniteCache<Object, Object> cache_customer =
> ignite.createCache(ccfg_customer);
>
> sql = "SELECT count(c_id) FROM customer WHERE c_w_id = 6";
> sql_query = new SqlFieldsQuery(sql);
> res = cache_customer.query(sql_query).getAll();
>
> how can I convert this SqlFieldsQuery using binary objects?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Sql-Query-on-binary-objects-tp8992p8995.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Sql Query on binary objects

2016-11-15 Thread Vladislav Pyatkov
Hi,

If you use the BinaryMarshaller, the server works with objects in binary
format.
What do you mean by "convert existing cache queries"?
Could you please explain what you want to achieve?

On Tue, Nov 15, 2016 at 2:06 PM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> Is there any way we can convert existing cache queries to queries with
> binary
> objects without much code change? Can we use binary objects with
> SqlFieldsquery?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Sql-Query-on-binary-objects-tp8992.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: CacheInterceptor with client nodes

2016-11-15 Thread Vladislav Pyatkov
Hi Igor,

Why don't you implement a CacheStore for this behavior?
You should implement your own CacheStore and set "Read-Through" to true[1].
In this case every "get" on the cache is redirected to the CacheStore on the
primary node whenever the data cannot be found in the cache.

If you want to decrease the lifetime of entries in the cache, you can use
expiry policies[2].

[1]:
https://apacheignite.readme.io/docs/persistent-store#read-through-and-write-through
[2]: https://apacheignite.readme.io/docs/expiry-policies
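A rough sketch of how the JNI calls could be wired into such a store;
loadFromNative()/writeToNative() are hypothetical stand-ins for the native
functions, not an existing API:

```java
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Sketch: a cache store backed by hypothetical native (JNI) functions.
public class NativeBackedStore extends CacheStoreAdapter<Integer, String> {
    @Override public String load(Integer key) throws CacheLoaderException {
        // Invoked for 'key' on a cache miss when readThrough is enabled.
        return loadFromNative(key);
    }

    @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry)
        throws CacheWriterException {
        writeToNative(entry.getKey(), entry.getValue());
    }

    @Override public void delete(Object key) throws CacheWriterException {
        // Remove the entry from the native store.
    }

    private native String loadFromNative(Integer key);
    private native void writeToNative(Integer key, String val);
}
```

It would be attached in the cache configuration with setCacheStoreFactory(
FactoryBuilder.factoryOf(NativeBackedStore.class)) together with
setReadThrough(true) and setWriteThrough(true).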

On Tue, Nov 15, 2016 at 11:30 AM, Игорь Гнатюк <gnatyuk...@gmail.com> wrote:

> Hi,
> I need to call a native function via JNI on every "get" and "put" call to
> a specific cache. The call should be made on a server node, where this
> concrete enrty resides.
>
> 2016-11-15 2:34 GMT+03:00 vkulichenko <valentin.kuliche...@gmail.com>:
>
>> Hi,
>>
>> What exactly are you trying to do? What's the use case?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/CacheInterceptor-with-client-nodes-tp8964p8970.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


-- 
Vladislav Pyatkov


Re: How can I detect DB writing abnormal in case of write-behind?

2016-11-14 Thread Vladislav Pyatkov
Hi,

I do not understand what you mean by: "In this process, something wrong in
the DB, then data can not be written into it."

How did you detect this?
If the "write behind" flag is set to true, the data is inserted into the DB
asynchronously (in a dedicated thread). You have to wait until the data has
actually been saved to the DB.

Through the cache metrics you can watch the number of "put" operations on
the cache (org.apache.ignite.cache.CacheMetrics#getCachePuts).
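As a sketch (it assumes statistics are enabled on the cache via
CacheConfiguration#setStatisticsEnabled(true); the DB-side row count is
something you would obtain yourself):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMetrics;

// Sketch: compare puts accepted by the cache with rows confirmed in the DB
// to spot a growing write-behind backlog.
public class WriteBehindMonitor {
    static void checkBacklog(IgniteCache<?, ?> cache, long persistedRows) {
        CacheMetrics m = cache.metrics();
        long puts = m.getCachePuts();

        if (puts - persistedRows > 10_000)
            System.err.println("Write-behind backlog: " + (puts - persistedRows));
    }
}
```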

On Mon, Nov 14, 2016 at 5:44 PM, ght230 <ght...@163.com> wrote:

> Hello:
>
> I am trying to put some data to a cache configured with
> write-through and write-behind.
>
> In this process, something wrong in the DB, then data
> can not be written into it.
>
> I want to know how can I detect the database error as soon as possible?
>
> Can I detect it by the metrics of the CacheMetrics?
>
> If the answer is "YES", there are so many metrics in the
> class "org.apache.ignite.cache.CacheMetrics", which one can I use?
>



-- 
Vladislav Pyatkov


Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread Vladislav Pyatkov
Hi Alex,

You should do like this:

IgniteCompute compute = ignite.compute().withAsync();
compute.apply(...); // in async mode the result comes through the future
IgniteFuture<?> future = compute.future();
...
future.cancel();

And handle the Thread.interrupted() flag inside the closure.
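The advice to check the interrupted flag is the standard cooperative
cancellation pattern; this self-contained example illustrates it with plain
java.util.concurrent (cancelling an IgniteFuture interrupts the job thread in
a similar way):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CancellableTask {
    // A long-running loop that cooperates with cancellation by checking the
    // thread's interrupted status on every iteration (without clearing it).
    static int countUntilInterrupted(int limit) {
        int n = 0;
        while (n < limit && !Thread.currentThread().isInterrupted())
            n++;
        return n;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> f = pool.submit(() -> countUntilInterrupted(Integer.MAX_VALUE));

        Thread.sleep(100);
        f.cancel(true);   // delivers an interrupt to the worker thread

        pool.shutdown();
        // The loop notices the interrupt, so the pool terminates promptly.
        System.out.println(pool.awaitTermination(5, TimeUnit.SECONDS));
    }
}
```

Without the isInterrupted() check the loop would ignore the cancel and keep
running, which is exactly the leaked-thread behavior described above.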

On Mon, Nov 14, 2016 at 2:35 PM, alex <alexwan...@gmail.com> wrote:

> Thanks vdpyatkov.
>
> Currently, the number of threads is not the problem.
>
> The problem is that when client finished, how to finish threads on server
> node which created by this client.
>
> For example, client code is:
>
> String cacheKey = "jobIds";
> String cname = "myCacheName";
> ClusterGroup rmts =ignite.cluster().forRemotes();
> IgniteCache<String, List<String>> cache = ignite.getOrCreateCache(cname);
> List<String> jobList = cache.get(cacheKey);
> Collection<String> res = ignite.compute(rmts).apply(
> new IgniteClosure<String, String>() {
> @Override
> public String apply(String word) {
> return word;
> }
> },
> jobList
> );
> ignite.close();
> System.out.println("ignite Closed");
>
> if (res == null) {
> System.out.println("Error: Result is null");
> return;
> }
>
> res.forEach(s -> {
> System.out.println(s);
> });
> System.out.println("Finished!");
>
>
> When client initiate ignite instance, server side create 6 threads for this
> computing job.
> After client program exit,  the 6 threads still alive on server. And never
> exit until I kill the server.
> How can I finish threads after client job finished gracefully.
>
> Thanks for any suggestions
>
>
> vdpyatkov wrote
> > Hi Alex,
> > I think, these threads are executing into pools of threads, and number of
> > threads always restricted by pool size[1].
> > You can configure sizes manually:
> >
> >
> > 
> >
> > 
> > [1]:
> > https://apacheignite.readme.io/v1.7/docs/performance-tips#
> configure-thread-pools
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-
> Job-finished-Cause-out-of-memory-tp8934p8939.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How many nodes within your ignite cluster

2016-11-10 Thread Vladislav Pyatkov
Hi Duke,

I think Ignite cannot cope with every issue inside the same JVM. For
example: a looped thread which does not handle the interrupted flag.

But you can always try it, by implementing your own segmentation behavior.
Something like this:

ignite.events().localListen(new IgnitePredicate<Event>() {
    @Override public boolean apply(Event event) {
        (new Thread() {
            @Override public void run() {
                Ignition.ignite().close();
                Ignition.start(new IgniteConfiguration());
            }
        }).start();

        return true;
    }
}, EventType.EVT_NODE_SEGMENTED);

Do not forget to create a new configuration object, because Ignite changes state inside the old one.

On Fri, Nov 11, 2016 at 4:51 AM, Duke Dai  wrote:

> Hi vdpyatkov,
>
> Finally, I figured out not only one but all nodes become segmentation due
> to
> unknown VMWare infrastructure.
> I changed failuredetectiontimeout/sockettimeout/networktimeout, and the
> cluster survived last day, need more time to observe their behavior.
>
> I'm thinking SegmentationPolicy.
> Why SegmentationPolicy.RESTART_JVM was provided(must work with
> CommandLineStartup)? Any limitation/implication that soft-restart(in same
> JVM) won't work?
>
>
> Thanks,
> Duke
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-many-nodes-within-your-ignite-
> cluster-tp8808p8892.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: ON HEAP vs OFF HEAP memory mode performance Apache Ignite

2016-11-10 Thread Vladislav Pyatkov
Hi,

Working with OFF_HEAP requires more CPU time than ON_HEAP, because each
"get" has to read the bytes from memory (via the off-heap pointer) and
deserialize those bytes into a business object.

However, you can use the binary interface in order to avoid deserialization:

IgniteCache<BinaryObject, BinaryObject> cache =
ignite.cache(CACHE_NAME).withKeepBinary();

BinaryObject binaryObject = cache.get(key);
binaryObject.field(FIELD_NAME);

and avoid copying the object on each "get" (if the object is not changed
after "get"):




...

Also, if remote data processing suits you, you can use "invoke" (on a
withKeepBinary cache). In this case the work is done over the off-heap
pointer (without copying the bytes to the heap).


public Object process(MutableEntry<BinaryObject, BinaryObject> entry,
    Object... arguments) throws EntryProcessorException {
    BinaryObject binaryObject = entry.getValue();
    ...

On Thu, Nov 10, 2016 at 11:17 AM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> Following are average execution time for running 14 queries against 16
> million entries (DB size: 370 MB)
>
> OFF HEAP memory mode - 47 millisec
> ON HEAP memory mode - 16 millisec
>
> why there is difference in execution times between off heap and on heap
> memory modes as both are In-memory? What performance tuning can be applied
> on off heap memory mode for better results?(I have also tried JVM tuning
> mentioned in Ignite documentation, but its not giving any better results)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/ON-HEAP-vs-OFF-HEAP-memory-mode-
> performance-Apache-Ignite-tp8870.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Multiple servers in a Ignite Cluster

2016-11-09 Thread Vladislav Pyatkov
Hi,

It will not always happen. The Discovery SPI is responsible for this process.
It may be TcpDiscoveryMulticastIpFinder[1] (any node with the same
configuration started in the same network will join) or
TcpDiscoveryVmIpFinder[2] (only nodes with specific IPs will join) or any
other.

[1]:
https://apacheignite.readme.io/v1.7/docs/cluster-config#multicast-based-discovery
[2]:
https://apacheignite.readme.io/v1.7/docs/cluster-config#isolated-ignite-clusters-on-the-same-set-of-machin
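For example, a sketch of the static-IP variant in Java; the addresses and
port range below are placeholders:

```java
import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Sketch: a node configured this way only discovers and joins the nodes at
// the listed addresses, which defines the scope of the cluster.
public class StaticIpStartup {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "10.0.0.1:47500..47509",
            "10.0.0.2:47500..47509"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);

        Ignition.start(cfg);
    }
}
```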

On Wed, Nov 9, 2016 at 7:33 PM, Tracyl <tlian...@bloomberg.net> wrote:

> Thanks. In that case, my question is how to define the scope of cluster(Or
> how to specify the cluster a server belongs to)? I assume if someone else
> start a ignite node, would my ignite server auto-discover it as well?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multiple-servers-in-a-Ignite-Cluster-tp8840p8844.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Multiple servers in a Ignite Cluster

2016-11-09 Thread Vladislav Pyatkov
Hi Tracyl,

You need to start each Ignite server separately.
But you can always do it over ssh using a simple bash script.

On Wed, Nov 9, 2016 at 6:55 PM, Tracyl <tlian...@bloomberg.net> wrote:

> I have following ignite config:
>
> def initializeIgniteConfig() = {
> val ipFinder = new TcpDiscoveryVmIpFinder()
>
> val HOST = "xx.xx.xx.xx:47500..47509"
> ipFinder.setAddresses(Collections.singletonList(HOST))
>
> val discoverySpi = new TcpDiscoverySpi()
> discoverySpi.setIpFinder(ipFinder)
>
> val igniteConfig = new IgniteConfiguration()
> igniteConfig.setDiscoverySpi(discoverySpi)
>
> //Ignite uses work directory as a relative directory for internal
> write activities, for example, logging.
> // Every Ignite node (server or client) has it's own work directory
> independently of other nodes.
>
> igniteConfig.setWorkDirectory("/tmp")
> igniteConfig
> }
>
> My use case is: I would like to have multiple ignite servers, each server
> cache a subset of the data and I send distributed closures to each node to
> do local computation. In this case, can I start multiple servers on one
> single machine by just passing multiple ip and port? Or I will need to
> start
> each server on each machine separately? Thanks in advance!
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multiple-servers-in-a-Ignite-Cluster-tp8840.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Problem with v for listening updates

2016-11-08 Thread Vladislav Pyatkov
Hi Andry,

This looks like a mistake:

cache.put(1, new TestVal("old"));
TestVal oldVal = cache.get(1);
oldVal.val = "new";
cache.put(1, oldVal);

You always need to create a new object.
Try it like this:

cache.put(1, new TestVal("old"));
cache.put(1, new TestVal("new"));

Otherwise you can get strange behavior, in particular if you are using
copyOnRead = false.

On Fri, Nov 4, 2016 at 4:03 PM, Andry <andrykoro...@gmail.com> wrote:

> Hi,
>
> We're trying to use Continuous Query to handle updates on existing object,
> but it looks like as oldValue in Update event is working incorrectly.
>
> Example of our usage:
>
> public class TestContinuousQueryUpdateEvent {
>
> private static class TestVal {
> public String val;
>
> public TestVal(String val) {
> this.val = val;
> }
>
> @Override
> public String toString() {
> return val;
> }
> }
>
> public static void main(String[] args) throws Exception {
> try (Ignite ignite =
> Ignition.start("examples/config/example-ignite.xml")) {
> try (IgniteCache<Integer, TestVal> cache =
> ignite.getOrCreateCache("test")) {
> ContinuousQuery<Integer, TestVal> qry = new
> ContinuousQuery<>();
>
> qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(remoteFilter()));
>
> qry.setLocalListener(evts -> {
> for (CacheEntryEvent<? extends Integer, ? extends TestVal> e : evts)
> System.out.println("Local Listener: Event Type = "
> +
> e.getEventType() + ", Old val = " + e.getOldValue() + ", New val = " +
> e.getValue());
> });
>
> cache.query(qry);
>
> cache.put(1, new TestVal("old"));
>
> TestVal oldVal = cache.get(1);
> oldVal.val = "new";
>
> cache.put(1, oldVal);
>
> sleep(1000);
> } finally {
> ignite.destroyCache("test");
> }
> }
> }
>
> private static CacheEntryEventSerializableFilter<Integer, TestVal>
> remoteFilter() {
> return e -> true;
> }
> }
>
> Output:
> Local Listener: Event Type = CREATED, Old val = null, New val = old
> Local Listener: Event Type = UPDATED, Old val = new, New val = new
>
> We expected that oldValue will return actually saved object rather then
> reference to object we have modified in our code.
>
> Please advise how should we deal with this.
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Problem-with-v-for-listening-updates-tp8709.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Using OFF_HEAP_TIERED and Replicated Heap continously grows eventually heap crash

2016-11-08 Thread Vladislav Pyatkov
Hi,

Yes, you are right: if you decrease the property (sqlOnheapRowCacheSize),
you sacrifice performance.

But the default value is already quite large:

public static final int DFLT_SQL_ONHEAP_ROW_CACHE_SIZE = 10 * 1024;

You can try to analyse a heap dump in order to confirm the root cause of
the issue.

On Tue, Nov 1, 2016 at 7:19 PM, styriver <scott_tyri...@mgic.com> wrote:

> It seems if I set this property to 1 (it can't be set to zero), then the
> heap size does not grow. I am assuming we are sacrificing performance for
> this. Is there still an issue with the Ignite code, or is this desired behavior?
>
> 
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Using-OFF-HEAP-TIERED-and-Replicated-
> Heap-continously-grows-eventually-heap-crash-tp8604p8649.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Async messaging

2016-11-07 Thread Vladislav Pyatkov
Methods which support asynchronous execution are marked with
@IgniteAsyncSupported, but the method
(o.a.i.IgniteMessaging#send(java.lang.Object, java.lang.Object)) does not
have that annotation.

You can see the convention in the article [1].

[1]:
http://apacheignite.gridgain.org/docs/async-support#igniteasyncsupported
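For contrast, this is roughly what the pattern looks like with a method that does carry @IgniteAsyncSupported, such as IgniteMessaging#remoteListen. A sketch against the legacy 1.x withAsync API; it assumes a started Ignite instance named `ignite` and is not runnable on its own:

```java
IgniteMessaging msg = ignite.message().withAsync();

// remoteListen IS @IgniteAsyncSupported: in async mode it returns immediately
// with a default value, and the real result comes from future().
msg.remoteListen("myTopic", (nodeId, m) -> {
    System.out.println("Received: " + m);
    return true; // keep the listener registered
});

IgniteFuture<UUID> fut = msg.future(); // grab the future right after the async call
UUID listenerId = fut.get();           // blocks until the registration completes
```

Note that future() must be called on the same thread, immediately after the async-supported operation.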

On Tue, Nov 8, 2016 at 9:22 AM, Andrey Kornev <andrewkor...@hotmail.com>
wrote:

> Daniel,
> A couple of things:
>
> - according to javadocs, IgniteMessaging.send() returns void. To obtain
> the future you should instead call IgniteMessaging.future() immediately
> following the send() call.
>
> - in some cases, Ignite's 'async' operations are actually synchronous.
> Ignite's interpretation of 'asynchronous' is quite unorthodox and the users
> should be aware of possible deadlocks and thread starvation, among other
> things.
>
> Regards,
> Andrey
> _
> From: Daniel Stieglitz <dstiegl...@stainlesscode.com>
> Sent: Saturday, November 5, 2016 1:47 PM
> Subject: Async messaging
> To: <user@ignite.apache.org>
>
>
>
> Hi folks:
>
> We are trying to get messaging working asynchronously. Take a look at the
> code below:
>
> IgniteMessaging rmtMsg = grid.message().withAsync();
>
> def future = rmtMsg.send(destination, message);
>
> log.debug("got future ${future}");
>
>
> That code runs synchronously and the debug says "got future null"
>
> Is there something else we need to configure to get this working?
>
>
> Dan
>
>
>
>


-- 
Vladislav Pyatkov


Re: Question: When to use CacheConfiguration.setIndexedTypes()?

2016-11-07 Thread Vladislav Pyatkov
Hi,
Please do not address the community as "Support".

The method (.setIndexedTypes()) is meant to specify the classes of keys
and values, so that the @QuerySqlField annotations can be extracted from them [1].

Alternatively, you can use the QueryEntity approach [2] and assign indexes directly
in config or code.

[1]:
http://apacheignite.gridgain.org/docs/sql-queries#configuring-sql-indexes-by-annotations
[2]:
http://apacheignite.gridgain.org/docs/sql-queries#configuring-sql-indexes-using-queryentity
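In short: setIndexedTypes() takes key/value class pairs, not individual fields, which is why 'salary' never appears there. A minimal sketch (cache name is illustrative):

```java
CacheConfiguration<AffinityKey<Long>, Person> cfg =
    new CacheConfiguration<>("personCache");

// One (keyClass, valueClass) pair per indexed type stored in this cache.
// Ignite then scans Person for @QuerySqlField annotations, so 'id', 'orgId'
// and 'salary' (and their indexes) are all discovered from the annotations.
cfg.setIndexedTypes(AffinityKey.class, Person.class);
```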

On Sun, Nov 6, 2016 at 6:35 PM, techbysample <tu...@netmille.com> wrote:

> Support,
>
> In review of Ignite Documentation (1.7.0) , I have the following questions
> about the proper use of 'CacheConfiguration.setIndexedTypes'
>
> In the 'ignite-examples' that come with distribution, the 'Person' class is
> annotated
> as follows:
>
>//Person's annotations
> /** Person ID (indexed). */
> @QuerySqlField(index = true)
> public Long id;
>
> /** Organization ID (indexed). */
> @QuerySqlField(index = true)
> public Long orgId;
>
> /** Salary (indexed). */
> @QuerySqlField(index = true)
> public double salary;
>
>   In the 'Person' class, 3 fields are  annotated (@QuerySqlField(index =
> true))
>   indexed: 'id', 'orgId', and 'salary'.
>
>  However, in the 'CacheQueryExample' class, it's main method contains:
>
> personCacheCfg.setIndexedTypes(AffinityKey.class,
> Person.class);
>
>
> Question:
> Will you explain when it is necessary for
> CacheConfiguration.setIndexedTypes()
> to be applied for  respective fields annotated with @QuerySqlField(index =
> true)?
>
> IE: Why  isn't 'salary' used in CacheConfiguration.setIndexedTypes()?
>
>
> Please advise.
>
> Regards,
> techbysample
>
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-When-to-use-CacheConfiguration-
> setIndexedTypes-tp8721.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Async messaging

2016-11-06 Thread Vladislav Pyatkov
Look at the article about Ignite's asynchronous support [1].
You need to use the .future() method; the synchronous result is always null in
this case.

[1]: http://apacheignite.gridgain.org/docs/async-support

On Sat, Nov 5, 2016 at 11:47 PM, Daniel Stieglitz <
dstiegl...@stainlesscode.com> wrote:

> Hi folks:
>
> We are trying to get messaging working asynchronously. Take a look at the
> code below:
>
> IgniteMessaging rmtMsg = grid.message().withAsync();
>
> def future = rmtMsg.send(destination, message);
>
> log.debug("got future ${future}");
>
>
> That code runs synchronously and the debug says "got future null"
>
> Is there something else we need to configure to get this working?
>
>
> Dan
>
>


-- 
Vladislav Pyatkov


Re: Killing a node under load stalls the grid with ignite 1.7

2016-11-06 Thread Vladislav Pyatkov
Hi,

It should not be done in the CacheStore implementation, but if you do not want
to re-write the logic, do it asynchronously.

On Thu, Nov 3, 2016 at 6:35 PM, bintisepaha <binti.sep...@tudor.com> wrote:

> the problem is when I am in write behind for order, how do I access the
> trade
> object. its only present in the cache. at that time I need access trade
> cache and that is causing issues.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8695.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Question about Class Not found

2016-11-03 Thread Vladislav Pyatkov
Hi,

I can only guess which class was not found, but I think you have not
loaded the jar on each node.
You have several options:
1) Copy the jar into the classpath on each node.
2) Turn on peer-to-peer class loading [1].
3) Use QueryEntity [2], without field annotations and the setIndexedTypes method.

[1]: o.a.i.configuration.IgniteConfiguration#setPeerClassLoadingEnabled
[2]:
https://apacheignite.readme.io/docs/sql-queries#configuring-sql-indexes-using-queryentity
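For option 2, peer-to-peer class loading is a single flag on the node configuration; it must be set consistently on every node. A sketch:

```java
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true); // option 2: P2P class loading on all nodes
Ignite ignite = Ignition.start(cfg);
```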

On Thu, Nov 3, 2016 at 2:29 PM, devis76 <devis.balse...@flexvalley.com>
wrote:

> Hi,
> i have a simple Pojo
>
> public class Client implements Serializable
> private static final long serialVersionUID = 2747772722162744232L;
> @QuerySqlField
> private final Date registrationDate;
> @QuerySqlField(index = true)
> private final String lwersion;
> ...
>
> I have create simple Cache<String,Client>...
> cacheCfg.setBackups(0);
> cacheCfg.setName(ctx.name());
> cacheCfg.setCacheMode(PARTITIONED);
>
>
> When i use a Query
>
> QueryCursor<List<?>> qryx = cache
>
>.query(new SqlFieldsQuery("select * from " +
> valueType.getSimpleName()));
> List list = new ArrayList();
> for (List row : qryx) {
> logger.trace("Query {} List size {}",
> cache.getName(), row.size());
> logger.trace("Query {} List size {} {}",
> cache.getName(), row.get(0),
> row.get(1));
> logger.trace("Query Cache  {} Key found {}
> {}", cache.getName(),
> row.get(0));
> list.add((V) row.get(1));
> }
> qryx.close();
> return list;
>
> When i run this query with a single node all works fine.
>
> When i'll start second Karaf the "last node started" throw a Class Not
> Found
> have you any suggestion please?
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-about-Class-Not-found-tp8685.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: DataStreamer is closed

2016-11-03 Thread Vladislav Pyatkov
teCache<String, Person> cache = CacheManager.getCache();
>>>>
>>>> if (CollectionUtils.isNotEmpty(request.getPersons())){
>>>> String id = null;
>>>> for (Person ib : request.getPersons()){
>>>> if (StringUtils.isNotBlank(ib.getId())){
>>>> id = ib.getId();
>>>> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
>>>> cache.remove(id);
>>>> }else {
>>>> // no need to store the id. so setting null.
>>>> ib.setId(null);
>>>> entries.put(id, ib);
>>>> }
>>>> }
>>>> }
>>>> }else {
>>>>
>>>> }
>>>> }catch (Exception ex){
>>>> logger.error("Error while updating the cache - {} {} " ,msg, ex);
>>>> }
>>>>
>>>> return entries;
>>>> }
>>>> });
>>>>
>>>>kafkaStreamer.start();
>>>> }catch (Exception ex){
>>>> logger.error("Error in kafka data streamer ", ex);
>>>> }
>>>>
>>>>
>>>> Please let me know if you see any issues. thanks.
>>>>
>>>> On 3 November 2016 at 15:59, Anton Vinogradov <avinogra...@gridgain.com
>>>> > wrote:
>>>>
>>>>> Anil,
>>>>>
>>>>> Could you provide getStreamer() code and full logs?
>>>>> Possible, ignite node was disconnected and this cause DataStreamer
>>>>> closure.
>>>>>
>>>>> On Thu, Nov 3, 2016 at 1:17 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> HI,
>>>>>>
>>>>>> I have created custom kafka data streamer for my use case and i see
>>>>>> following exception.
>>>>>>
>>>>>> java.lang.IllegalStateException: Data streamer has been closed.
>>>>>> at org.apache.ignite.internal.pro
>>>>>> cessors.datastreamer.DataStreamerImpl.enterBusy(DataStreamer
>>>>>> Impl.java:360)
>>>>>> at org.apache.ignite.internal.pro
>>>>>> cessors.datastreamer.DataStreamerImpl.addData(DataStreamerIm
>>>>>> pl.java:507)
>>>>>> at org.apache.ignite.internal.pro
>>>>>> cessors.datastreamer.DataStreamerImpl.addData(DataStreamerIm
>>>>>> pl.java:498)
>>>>>> at net.juniper.cs.cache.KafkaCach
>>>>>> eDataStreamer.addMessage(KafkaCacheDataStreamer.java:128)
>>>>>> at net.juniper.cs.cache.KafkaCach
>>>>>> eDataStreamer$1.run(KafkaCacheDataStreamer.java:176)
>>>>>> at java.util.concurrent.Executors
>>>>>> $RunnableAdapter.call(Executors.java:511)
>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>> at java.util.concurrent.ThreadPoo
>>>>>> lExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>>> at java.util.concurrent.ThreadPoo
>>>>>> lExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>>> at java.lang.Thread.run(Thread.java:745)
>>>>>>
>>>>>>
>>>>>>
>>>>>> addMessage method is
>>>>>>
>>>>>>  @Override
>>>>>> protected void addMessage(T msg) {
>>>>>> if (getMultipleTupleExtractor() == null){
>>>>>> Map.Entry<K, V> e = getSingleTupleExtractor().extr
>>>>>> act(msg);
>>>>>>
>>>>>> if (e != null)
>>>>>> getStreamer().addData(e);
>>>>>>
>>>>>> } else {
>>>>>> Map<K, V> m = getMultipleTupleExtractor().extract(msg);
>>>>>> if (m != null && !m.isEmpty()){
>>>>>> getStreamer().addData(m);
>>>>>> }
>>>>>> }
>>>>>> }
>>>>>>
>>>>>>
>>>>>> Do you see any issue ? Please let me know if you need any additional
>>>>>> information. thanks.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


-- 
Vladislav Pyatkov


Re: org.h2.api.JavaObjectSerializer not found

2016-10-31 Thread Vladislav Pyatkov
Hi,

It seems like the h2 dependency has not been loaded.
Did this exception occur after the manifest file was changed [1]?
I think the exception is related to an incorrectly configured OSGi bundle.

[1]:
http://apache-ignite-users.70518.x6.nabble.com/KARAF-4-6-4-8-Snapshot-IgniteAbstractOsgiContextActivator-tc8552.html

On Thu, Oct 27, 2016 at 3:26 PM, flexvalley <devis.balse...@flexvalley.com>
wrote:

> Sorry i'm new with this forum... i post without sub before
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/org-h2-api-JavaObjectSerializer-not-
> found-tp8538p8553.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-31 Thread Vladislav Pyatkov
Hi,

I mean, if you need to create the Order entry before the Trade, you can do it in
the CacheStore implementation, but do not use IgniteCache for this. Just write
the inserts for both tables.

Why doesn't this approach suit you?

On Mon, Oct 31, 2016 at 4:55 PM, bintisepaha  wrote:

> Hi Vladislav,
>
> what you are describing above is not clear to me at all?
> Could you please elaborate?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8630.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-31 Thread Vladislav Pyatkov
Hi,

In the "write-behind handler" you need to write to the database only (you can
fill several tables if needed, for example "Order" and then "Trade").
A cache which has "read-through" on the "Trade" table will always read the value from
the database whenever the cache entry does not exist.
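A sketch of what that handler might look like: both tables are written with plain JDBC inside writeAll(), and IgniteCache is never called from the store. The Order and Trade classes, their getters, the SQL, and the injected `dataSource` are all hypothetical; adapt them to your schema:

```java
public class OrderTradeStore extends CacheStoreAdapter<Long, Order> {
    private final DataSource dataSource; // injected JDBC connection pool (hypothetical)

    public OrderTradeStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override public void writeAll(Collection<Cache.Entry<? extends Long, ? extends Order>> entries) {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement insOrder = conn.prepareStatement(
                 "INSERT INTO orders (id, status) VALUES (?, ?)");
             PreparedStatement insTrade = conn.prepareStatement(
                 "INSERT INTO trades (id, order_id, qty) VALUES (?, ?, ?)")) {
            for (Cache.Entry<? extends Long, ? extends Order> e : entries) {
                Order order = e.getValue();
                insOrder.setLong(1, order.getId());
                insOrder.setString(2, order.getStatus());
                insOrder.addBatch();

                // Trades travel on the Order value itself; they are never
                // fetched via cache.get(), which is what stalls the flusher.
                for (Trade t : order.getTrades()) {
                    insTrade.setLong(1, t.getId());
                    insTrade.setLong(2, order.getId());
                    insTrade.setInt(3, t.getQty());
                    insTrade.addBatch();
                }
            }
            insOrder.executeBatch(); // parent rows first, satisfying the FK constraint
            insTrade.executeBatch();
        }
        catch (SQLException ex) {
            throw new CacheWriterException("Failed to persist orders/trades", ex);
        }
    }

    // load(), write() and delete() omitted for brevity in this sketch.
}
```

The key design point: the value object carries everything the store needs, so the store never has to reach back into the grid.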

On Thu, Oct 27, 2016 at 6:03 PM, bintisepaha <binti.sep...@tudor.com> wrote:

> yes I think you are right. Is there any setting that we can use in write
> behind that will not lock the entries?
> the use case is we have is like this
>
> Parent table - Order (Order Cache)
> Child Table - Trade (Trade Cache)
>
> We only have write behind on Order Cache and when writing that we write
> order and trade table both. so we query trade cache from order cache store
> writeAll() which is causing the above issue. We need to do this because we
> cannot write trade in the database without writing order. Foreign key
> constraints and data-integrity.
>
> Do you have any recommendations to solve this problem? We cannot use
> write-through. How do we make sure 2 tables are written in an order if they
> are in separate caches?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8557.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-10-31 Thread Vladislav Pyatkov
Hi Anil,

This might be a PreparedStatement restriction.
In that case you need to generate the query by hand.
Look at this StackOverflow answer [1].

[1]:
http://stackoverflow.com/questions/3107044/preparedstatement-with-list-of-parameters-in-a-in-clause
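"Generate the query by hand" means building the right number of '?' placeholders for the list, as the StackOverflow answer suggests. A minimal self-contained sketch; the table and column names are illustrative:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class InClauseBuilder {
    /** Builds e.g. "SELECT name FROM Person WHERE id IN (?, ?, ?)" for 3 parameters. */
    static String buildInQuery(int paramCount) {
        String placeholders = String.join(", ", Collections.nCopies(paramCount, "?"));
        return "SELECT name FROM Person WHERE id IN (" + placeholders + ")";
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("a", "b", "c");
        String sql = buildInQuery(ids.size());
        // Prints: SELECT name FROM Person WHERE id IN (?, ?, ?)
        System.out.println(sql);
        // Then bind each value positionally:
        // for (int i = 0; i < ids.size(); i++) stmt.setString(i + 1, ids.get(i));
    }
}
```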

On Sat, Oct 29, 2016 at 4:44 PM, Anil <anilk...@gmail.com> wrote:

> second try.
>
> On 28 October 2016 at 15:24, Anil <anilk...@gmail.com> wrote:
>
>> Any inputs please ?
>>
>> On 28 October 2016 at 09:27, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Val,
>>>
>>> the below one is multiple IN queries with AND but not OR. correct ?
>>>
>>> SqlQuery with join table worked for IN Query and the following prepared
>>> statement is not working.
>>>
>>> List inParameter = new ArrayList<>();
>>> inParameter.add("8446ddce-5b40-11e6-85f9-005056a90879");
>>> inParameter.add("f5822409-5b40-11e6-ae7c-005056a91276");
>>> inParameter.add("9f445a19-5b40-11e6-ab1a-005056a95c7a");
>>> inParameter.add("fd12c96f-5b40-11e6-83f6-005056a947e8");
>>> PreparedStatement statement = conn.prepareStatement("SELECT  p.name
>>> FROM Person p join table(joinId VARCHAR(25) = ?) k on p.id = k.joinId");
>>> statement.setObject(1, inParameter.toArray());
>>> ResultSet rs = statement.executeQuery();
>>>
>>> Thanks for your help.
>>>
>>
>>
>


-- 
Vladislav Pyatkov


Re: java.lang.IllegalStateException: Failed to create data streamer (grid is stopping).

2016-10-31 Thread Vladislav Pyatkov
Hi Bob,

This message means that the Ignite instance was stopped before
cache.loadAll was invoked.
Check it using Ignition.state() in your code.
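A quick guard before the call (a sketch; assumes the default unnamed grid and an existing `cache`, `keys` and logger):

```java
// Ignition.state() reports STARTED, STOPPED or STOPPED_ON_SEGMENTATION.
if (Ignition.state() == IgniteState.STARTED)
    cache.loadAll(keys, true, null);
else
    System.err.println("Ignite is not running: " + Ignition.state());
```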

On Mon, Oct 31, 2016 at 5:58 AM, 胡永亮/Bob <hu...@neusoft.com> wrote:

> Hi
>
> I am using Ignite 1.6.
> I meet the exception as the mail title when I call
> cache.loadAll(keys, true, null);
> And this Exception is not logged.
> I found this through debugging.
>
> Actually, the ignite cluster is running.
>
> Can anyone tell what is the possible reason?
> Thank you.
>
> --
> Bob
>
> 
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> --------
> ---
>



-- 
Vladislav Pyatkov


Re: Does read-through work with replicated caches ?

2016-10-27 Thread Vladislav Pyatkov
Hi,

It will work, because a replicated cache is implemented as a partitioned cache with
the number of backups equal to the number of nodes.

On Thu, Oct 27, 2016 at 2:13 PM, Kristian Rosenvold <krosenv...@apache.org>
wrote:

> Does this configuration actually do anything with a replicated cache, if
> so what does it do ?
>
> Kristian
>
>


-- 
Vladislav Pyatkov


Re: How to avoid data skew in collocate data with data

2016-10-27 Thread Vladislav Pyatkov
Hi,

One affinity key is always bound to one node. That is the meaning of an affinity key
(all entries with the same affinity key are stored on one node).
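The mapping is key, then partition, then node: every entry with the same affinity key lands in the same partition no matter how many entries there are. A deliberately simplified model of that mapping, not Ignite's exact function:

```java
public class AffinityDemo {
    static final int PARTITIONS = 1024; // Ignite's default partition count

    /** Simplified affinity: the same key always maps to the same partition. */
    static int partition(Object affinityKey) {
        return Math.abs(affinityKey.hashCode() % PARTITIONS);
    }

    public static void main(String[] args) {
        int p = partition(553);
        // A million entries sharing affinity key 553 all map to partition p,
        // and that partition lives on exactly one primary node.
        for (int i = 0; i < 1_000_000; i++)
            if (partition(553) != p) throw new AssertionError();
        System.out.println("affinity key 553 -> partition " + p);
    }
}
```

This is why a key with bad selectivity cannot be "split" across nodes: the node is chosen per partition, and the partition is fixed by the key.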

On Thu, Oct 27, 2016 at 9:45 AM, ght230 <ght...@163.com> wrote:

> If there are too many data related to one affinity key, even more than the
> capacity of one node.
>
> Will Ignite automatically split that data and stored them in several nodes?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-
> data-with-data-tp8454p8539.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to avoid data skew in collocate data with data

2016-10-26 Thread Vladislav Pyatkov
Hi,

No, it will not help, because the affinity function has no information about
the data volume in a partition.
The purpose of the function (RendezvousAffinityFunction or
FairAffinityFunction) is to distribute the partitions evenly between nodes
(but not the data). You can understand the affinity contract more deeply if you look at
its interface [1].

[1]:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/affinity/AffinityFunction.java

On Wed, Oct 26, 2016 at 5:57 PM, ght230 <ght...@163.com> wrote:

> For some reason, I can not choose more selective affinty key than current.
>
> If I increase number of partition, will it help me to avoid the case that
> keys 553, 551, 554, 550 hitting into one node?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-
> data-with-data-tp8454p8512.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: [EXTERNAL] Re: SLF4J AND LOG4J delegation exception with ignite dependency

2016-10-26 Thread Vladislav Pyatkov
rotocol.java:72)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.
> startHttpProtocol(GridRestProcessor.java:831)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.start(
> GridRestProcessor.java:451)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1549)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:876)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1736)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1589)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.boot.NodeStartup.main(NodeStartup.java:21)
>
> -
> 2. If I include ignite-indexing dependency (h2 is available in class path),
> I get below exception -
>
> java.lang.NoClassDefFoundError: org/h2/constant/SysProperties
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(
> IgniteH2Indexing.java:1487)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.start(
> GridQueryProcessor.java:171)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1549)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:869)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1736)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1589)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.boot.NodeStartup.main(NodeStartup.java:21)
> Caused by: java.lang.ClassNotFoundException: org.h2.constant.SysProperties
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 14 more
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/SLF4J-AND-LOG4J-delegation-exception-
> with-ignite-dependency-tp8415p8425.html
>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
>
> --
>
> Vladislav Pyatkov
>
>
> --
>
> *If you reply to this email, your message will be added to the discussion
> below:*
>
> http://apache-ignite-users.70518.x6.nabble.com/SLF4J-AND-
> LOG4J-delegation-exception-with-ignite-dependency-tp8415p8443.html
>
> To unsubscribe from SLF4J AND LOG4J delegation exception with ignite
> dependency, click here.
>
> --
> View this message in context: Re: [EXTERNAL] Re: SLF4J AND LOG4J
> delegation exception with ignite dependency
> <http://apache-ignite-users.70518.x6.nabble.com/SLF4J-AND-LOG4J-delegation-exception-with-ignite-dependency-tp8415p8444.html>
>
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to avoid data skew in collocate data with data

2016-10-26 Thread Vladislav Pyatkov
Hi,

You can increase the number of partitions through the affinity function
configuration, but the default value (1024 partitions) should be enough.


<bean class="org.apache.ignite.configuration.CacheConfiguration">
  ...
  <property name="affinity">
    <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
      <property name="partitions" value="2048"/>
    </bean>
  </property>
</bean>
I am not sure that FairAffinityFunction will help you, in case keys
553, 551, 554, 550 hit one node.

How about choosing a more selective affinity key than the current one?

On Wed, Oct 26, 2016 at 5:16 AM, ght230 <ght...@163.com> wrote:

> How to increase number of partitions?
>
> And if data skew happened, how can I rebalance it?
> Using FairAffinityFunction()?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-
> data-with-data-tp8454p8491.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Evicted entry appears in Write-behind cache

2016-10-26 Thread Vladislav Pyatkov
Hi,

Yes, the write-behind buffer holds locks on the entries it stores until they are
flushed to the CacheStore, so these entries cannot be evicted.

If a "get" invocation cannot find the value in the cache, it will try to get the value
from the CacheStore.

On Fri, Oct 14, 2016 at 6:55 PM, Pradeep Badiger <pradeepbadi...@fico.com>
wrote:

> I have some log statements in the load() which gets called when there is
> no entry in the cache or write behind buffer. What I learnt is that there
> is a write behind buffer that holds on to the entries (even the evicted
> ones) and whenever there is a get() for an entry, ignite looks at both the
> cache and the write behind buffer and loads it. If it doesn’t find it in
> any of the two, it then uses the read through mechanism to load the entry
> from the store.
>
>
>
> Thanks,
>
> Pradeep V.B.
>
>
>
> *From:* Vladislav Pyatkov [mailto:vpyat...@gridgain.com]
> *Sent:* Friday, October 14, 2016 4:25 AM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Evicted entry appears in Write-behind cache
>
>
>
> Hi Paradeep,
>
>
>
> Why are you think, what the entry could not be read through your
> persistence storage:
>
>
>
> *.setReadThrough(stateQoS.isReadThroughEnabled()) *
>
>
>
> When cache can not get data from in memory, it will try to get data from
> storage, which configured as "ReadThrough".
>
>
>
> On Thu, Oct 13, 2016 at 6:41 PM, Pradeep Badiger <pradeepbadi...@fico.com>
> wrote:
>
> Hi Vladislav,
>
>
>
> Please see the below cache configuration.
>
>
>
>  LruEvictionPolicy
> evictionPolicy = new LruEvictionPolicy<>(getIntProperty(envConfig,
> CACHE_SIZE, 1));
>
>  cacheConfiguration
>
>
> .setEvictionPolicy(evictionPolicy)
>
>
> .setWriteBehindFlushSize(getIntProperty(envConfig, CACHE_WB_FLUSH_SIZE,
> 0))
>
>
> .setWriteBehindBatchSize(getIntProperty(envConfig, CACHE_WB_BATCH_SIZE,
> 200))
>
>
> .setWriteBehindEnabled(stateQoS.isWriteBehindEnabled())
>
>
> .setWriteBehindFlushFrequency(getIntProperty(envConfig,
> CACHE_WB_FLUSH_FREQ_MS, 5000))
>
> .
> setWriteBehindFlushThreadCount(getIntProperty(envConfig,
> CACHE_WB_FLUSH_THREADS, 10))
>
>
> .setCacheStoreFactory(new StateCacheStoreFactory<K, V>(cacheName,
>
>
> storageManager))
>
>
> .setName(cacheName)
>
>
> .setReadThrough(stateQoS.isReadThroughEnabled())
>
>
> .setWriteThrough(stateQoS.isWriteThroughEnabled());
>
>
>
> Thanks,
>
> Pradeep V.B.
>
>
>
> *From:* Vladislav Pyatkov [mailto:vldpyat...@gmail.com]
> *Sent:* Wednesday, October 12, 2016 2:59 AM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Evicted entry appears in Write-behind cache
>
>
>
> Hi Pradeep,
>
>
>
> Could you please provide cache configuration?
>
>
>
> On Tue, Oct 11, 2016 at 6:57 PM, Denis Magda <dma...@gridgain.com> wrote:
>
> Looks like that my initial understanding was wrong. There is a related
> discussion
>
> http://apache-ignite-users.70518.x6.nabble.com/Cache-
> read-through-with-expiry-policy-td2521.html
>
>
>
> —
>
> Denis
>
>
>
> On Oct 11, 2016, at 8:55 AM, Pradeep Badiger <pradeepbadi...@fico.com>
> wrote:
>
>
>
> Hi Denis,
>
>
>
> I did the get() on the evicted entry from the cache, it still returned me
> the value without calling the load() on the store. As you said, the entry
> would be cached in the write behind store even for the evicted entry. Is
> that true?
>
>
>
> Thanks,
>
> Pradeep V.B.
>
> *From:* Denis Magda [mailto:dma...@gridgain.com <dma...@gridgain.com>]
> *Sent:* Monday, October 10, 2016 9:13 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Evicted entry appears in Write-behind cache
>
>
>
> Hi,
>
>
>
> How do you see that the evicted entries are still in the cache? If you
> check this by calling cache get like operations then entries can be loaded
> back from the write-behind store or from your underlying store.
>
>
>
> —
>
> Denis
>
>
>
> On Oct 8, 2016, at 1:00 PM, Pradeep Badiger <pradeepbadi...@fico.com>
> wrote:
>
>
>
> Hi,
>
>
>
> I am trying to evaluate Apache Ignite and trying to explore eviction
> policy and write behind features. I am seeing that whenever a cache is
> configured with eviction policy and write behind feature, the write behind
> cache always have all the changed entries including the ones that are
> evicted, befo

Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-25 Thread Vladislav Pyatkov
Hi,

An incorrect implementation of CacheStore is the most probable reason, because
the stored entry is locked. You need to avoid locking one entry from inside another.

Re-write the code and re-check; I think the issue will be resolved.

On Tue, Oct 25, 2016 at 1:05 AM, bintisepaha <binti.sep...@tudor.com> wrote:

> Hi, actually we use a lot of caches from cache store writeAll().
> For confirming if that is the cause of the grid stall, we would have to
> completely change our design.
>
> Can someone confirm that this is the cause for grid to stall? referencing
> cache.get from a cache store and then killing or bringing up nodes leads to
> a stall?
>
> We see a node blocked on flusher thread while doing a cache.get() when the
> grid is stalled, if we kill that node, the grid starts functioning. But we
> would like to understand are we using write behind incorrectly or there are
> some settings that we can use to re-balance or write-behind that might save
> us from something like this.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8449.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to avoid data skew in collocate data with data

2016-10-25 Thread Vladislav Pyatkov
Hi,

I agree with Alexey: you need to increase the number of partitions.

In addition, it looks like your affinity key has bad selectivity.
Why is so much data bound to 553 and 551, but so little to 542 and 530?

I recommend selecting a different affinity key, or accepting that imbalance.
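For illustration, here is a stdlib-only sketch of why a low-selectivity affinity key skews data. The partition function below is a deliberate simplification, not Ignite's actual RendezvousAffinityFunction, and the key values and entry counts are made up for the demo: every entry sharing an affinity key value lands in the same partition, so a handful of distinct values loads only a handful of partitions no matter how many you configure.

```java
import java.util.HashMap;
import java.util.Map;

public class AffinitySkewDemo {
    // Toy partition function; Ignite's real mapping is more elaborate,
    // but any hash-based scheme sends equal affinity keys to one partition.
    static int partition(Object affinityKey, int parts) {
        return Math.abs(affinityKey.hashCode() % parts);
    }

    public static void main(String[] args) {
        int parts = 1024;
        // 100,000 entries but only 4 distinct affinity key values
        // (hypothetical values and counts, just for illustration).
        int[] keys = {553, 551, 542, 530};
        int[] entries = {60_000, 30_000, 6_000, 4_000};

        Map<Integer, Integer> partSizes = new HashMap<>();
        for (int i = 0; i < keys.length; i++)
            partSizes.merge(partition(keys[i], parts), entries[i], Integer::sum);

        // Only 4 of the 1024 partitions hold any data at all.
        System.out.println("non-empty partitions: " + partSizes.size());
    }
}
```

Increasing the partition count cannot help here; only a higher-cardinality affinity key spreads the entries out.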

On Tue, Oct 25, 2016 at 4:24 PM, Alexey Kuznetsov <akuznet...@apache.org>
wrote:

> Hi!
>
> How many partitions configured in your cache?
> As far as I see - 11 partitions?
> Could you try to configure more (64, 128, 256)?
> And see how data will be distributed?
>
> By default Ignite caches configured with 1024 partitions.
>
>
>
> On Tue, Oct 25, 2016 at 8:20 PM, ght230 <ght...@163.com> wrote:
>
>> When data skew happened, what can I do to rebalance all the data to the 3
>> nodes.
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/How-to-avoid-data-skew-in-collocate-data-
>> with-data-tp8454p8473.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Alexey Kuznetsov
>



-- 
Vladislav Pyatkov


Re: Cant events listen from client Node

2016-10-25 Thread Vladislav Pyatkov
Hi

Could you please provide the source code as an example?

On Tue, Oct 25, 2016 at 4:18 PM, Labard <daiva...@at-consulting.ru> wrote:

> I have already enabled this event into configuration
>
>class="org.apache.ignite.configuration.IgniteConfiguration">
>
> 
>
> 
>
> 
>  static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
> 
>
> 
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File.Key
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File
> 
> 
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type.Key
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type
> 
> 
> 
> 
> 
>
> this work for server node but still does not work for client node.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470p8472.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Cant events listen from client Node

2016-10-25 Thread Vladislav Pyatkov
Hi,

If you want to handle EVT_CACHE_OBJECT_PUT from a client node, you need to
enable the event in the client node's configuration as well:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
        </list>
    </property>
    ...
</bean>

For additional information look at the article[1].

[1]: https://apacheignite.readme.io/docs/events

On Tue, Oct 25, 2016 at 2:48 PM, Labard <daiva...@at-consulting.ru> wrote:

> Hello
> I have Ignite cluster and one client node.
> I want to listen "put into cache" events in my client node, but something
> does not work.
> If I use server node instead (just remove client property from config) it
> works exellent.
>
> My listener code:
>
> ignite.events(ignite.cluster().forCacheNodes(CTL_FILES)).
> remoteListen((uuid,
> evt) -> {
> if (evt.cacheName().equals(CTL_FILES)) {
> final CTL_File value = (CTL_File) evt.newValue();
> if (value.getFileStatus().equals(TaskStatus.NEW)) {
> loadFile(value.getFileName(), count++,
> value.getFileTypeName() + "_TOPIC", ERROR_TOPIC);
> }
> }
> return true;
> }, (IgnitePredicate) cacheEvent ->
> cacheEvent.cacheName().equals(CTL_FILES), EventType.EVT_CACHE_OBJECT_PUT);
>
> What's the problem?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Ignite client thread amount control

2016-10-25 Thread Vladislav Pyatkov
No, it is not. One instance of the Ignite client can support several
parallel queries.
You can use the client like this:

Ignition.setClientMode(true);
Ignite ignite = Ignition.start(cfg);

for (int i = 0; i < threads.length; i++) {
    threads[i] = new Thread() {
        @Override public void run() {
            try (QueryCursor<List<?>> cursor = ignite.cache(name).query(new SqlFieldsQuery(sql))) {
                for (Object obj : cursor) {
                    // Ops.
                }
            }
        }
    };
    threads[i].start();
}

for (Thread thread : threads)
    thread.join();
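The same pattern can be written with java.util.concurrent instead of raw threads. This is a stdlib-only sketch in which a plain shared object stands in for the single Ignite client instance; in real code each submitted task would open its own QueryCursor against that one client, with no external locking.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelClientDemo {
    // Stands in for the single shared Ignite client instance.
    static final Object sharedClient = new Object();

    // Runs n "queries" in parallel against the one shared client and
    // returns how many completed.
    static int runQueries(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            futures.add(pool.submit(() -> {
                // In real code each task would run its own query cursor via
                // the shared client; the client handles concurrent queries.
                return sharedClient.hashCode();
            }));
        }
        int done = 0;
        for (Future<Integer> f : futures) {
            f.get();   // propagates any task failure
            done++;
        }
        pool.shutdown();
        return done;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("completed: " + runQueries(8));
    }
}
```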

On Tue, Oct 25, 2016 at 6:12 AM, Jeff Jiao <jeffjiaoyim...@gmail.com> wrote:

> Hi vkulichenko,
>
> Thanks for the reply! I already subscribed.
>
> What if I have multiple users query at the same time? One user hold the
> Ignite client and the others just wait?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-client-thread-amount-control-tp8434p8455.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: SLF4J AND LOG4J delegation exception with ignite dependency

2016-10-24 Thread Vladislav Pyatkov
Hi,

1) You can exclude the slf4j-log4j12 dependency from ignite-rest-http.
Like this:

compile ('org.apache.ignite:ignite-rest-http:1.6.0') {
exclude group: "org.slf4j", name: "slf4j-log4j12"
  }

2) Ignite 1.6 supports H2 version 1.3. You need to use the latest Ignite
version, 1.7, which supports H2 1.4.
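If the build were Maven rather than Gradle, the equivalent exclusion (assuming the same artifact coordinates as in the Gradle snippet above) would be:

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-rest-http</artifactId>
    <version>1.6.0</version>
    <exclusions>
        <!-- Keep Jetty's REST protocol but drop the conflicting binding. -->
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```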

On Sat, Oct 22, 2016 at 11:58 AM, chevy <chetan.v.ya...@target.com> wrote:

> 1. Now I am getting below exception. Saw in one of the threads that
> removing
> ignite-rest-http will solve the issue which it does. But I need to include
> rest-api as I will be using ignite rest services. Please help me fix this.
>
> java.lang.NoSuchMethodError:
> org.eclipse.jetty.util.log.StdErrLog.setProperties(Ljava/
> util/Properties;)V
> at
> org.apache.ignite.internal.processors.rest.protocols.http.jetty.
> GridJettyRestProtocol.(GridJettyRestProtocol.java:72)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.
> startHttpProtocol(GridRestProcessor.java:831)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.start(
> GridRestProcessor.java:451)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1549)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:876)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1736)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1589)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.boot.NodeStartup.main(NodeStartup.java:21)
>
> -
> 2. If I include ignite-indexing dependency (h2 is available in class path),
> I get below exception -
>
> java.lang.NoClassDefFoundError: org/h2/constant/SysProperties
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(
> IgniteH2Indexing.java:1487)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.start(
> GridQueryProcessor.java:171)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1549)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:869)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1736)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1589)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.boot.NodeStartup.main(NodeStartup.java:21)
> Caused by: java.lang.ClassNotFoundException: org.h2.constant.SysProperties
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 14 more
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/SLF4J-AND-LOG4J-delegation-exception-
> with-ignite-dependency-tp8415p8425.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Why doesn't the index be used in my test?

2016-10-24 Thread Vladislav Pyatkov
Hi Bob,

If you put the annotation on fields, then you need to use
"CacheConfiguration.setIndexedTypes". For a QueryEntity, you must instead
describe the entity in the configuration (QueryEntity.setIndexes), without
the annotation.

Please look at [1].

If it doesn't help, provide your query configuration.

[1]: https://apacheignite.readme.io/docs/sql-queries
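For the QueryEntity route, a Spring XML sketch follows; the key type, cache name, and field list are assumptions based on the query in the original message, not a verified configuration:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="Kc21Cache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="Kc21"/>
                <property name="fields">
                    <map>
                        <entry key="akc273" value="java.lang.String"/>
                        <entry key="bkc231" value="java.lang.String"/>
                    </map>
                </property>
                <property name="indexes">
                    <list>
                        <!-- Single-column sorted index on akc273. -->
                        <bean class="org.apache.ignite.cache.QueryIndex">
                            <constructor-arg value="akc273"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>
```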

On Mon, Oct 24, 2016 at 10:47 AM, 胡永亮/Bob <hu...@neusoft.com> wrote:

> Hi everyone,
>
> I have a model Kc21, akc273 is its one String column .
>
> I create the index in this column, as the following:
> @QuerySqlField(index = true)
> private String akc273;
>
> Then I load data into cache from oracle, total 47535542 rows.
>
> I execute the sql query to get the execute plan:
>
>
>
> SqlFieldsQuery sql = new SqlFieldsQuery(
>     "explain select BKC231 from Kc21 where akc273 = '王妍'");
> logger.info("execute plan:" + cache.query(sql).getAll());
>
> The result is:
> *execute plan:[[SELECT*
>
>
>
>
>
>
>
> *BKC231 AS __C0FROM "Kc21Cache".KC21/* "Kc21Cache".KC21.__SCAN_ 
> */WHERE AKC273 = STRINGDECODE('\u738b\u598d')], [SELECT__C0 AS BKC231FROM 
> PUBLIC.__T0/* "Kc21Cache"."merge_scan" */]]
> *
>
> I think this tell me that the index is not used in this sql. Why?
> And the query time also very long as the time before creating this
> index.
>
> Thank your reply. ^V^
>
> Bob
>
> 
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> 
> ---
>



-- 
Vladislav Pyatkov


Re: Data Streamer

2016-10-21 Thread Vladislav Pyatkov
Hi Anil,

This sounds very questionable.
Could you please attach your sources?

On Fri, Oct 21, 2016 at 5:16 PM, Anil <anilk...@gmail.com> wrote:

> HI,
>
> I was loading data into ignite cache using parallel tasks by broadcasting
> the task. Each taks (IgniteCallable implementation) has its own data
> streamer. was it correct approach.
>
> Loading data into ignite cache using data streamer is very slow compared
> normal cache.put.
>
> is that expected ? or need to some configurations to improve the
> performance.
>
> Thanks.
>
>


-- 
Vladislav Pyatkov


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-21 Thread Vladislav Pyatkov
Hi,

Yes, please attach new dumps (taken without the cache puts inside the cache
store). That will narrow down the search for the cause.

On Fri, Oct 21, 2016 at 3:54 PM, bintisepaha <binti.sep...@tudor.com> wrote:

> This was done to optimize our writes to the DB. on every save, we do not
> want
> to delete and insert records, so we do a digest comparison. Do you think
> this causes an issue? How does cache store handle transactions or locks?
> when a node dies, if a flusher thread is doing write-behind how does that
> affect data rebalancing?
>
> If you could answer the above questions, it will give us more clarity.
>
> We are removing it now. but still killing a node is stalling the cluster.
> Will send the latest thread dumps to you today.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8405.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: zero downtime while upgrade

2016-10-21 Thread Vladislav Pyatkov
Hi,

Unfortunately not, because changes between Ignite versions often come with
architectural changes.

GridGain supports rolling upgrades[1], but this works for minor versions
only.


[1]: https://gridgain.readme.io/docs/rolling-upgardes

On Fri, Oct 21, 2016 at 3:50 PM, Abhishek Jain <mail.abhishekj...@gmail.com>
wrote:

> Hi Folks,
>
> Does apache Ignite supports zero downtime while upgrading to new version ?
>
> Regards
> Abhishek
>



-- 
Vladislav Pyatkov


Re: One question about Partition-aware data loading

2016-10-21 Thread Vladislav Pyatkov
Hi Bob,

This is not clear to me. Why would a list of columns have a bad impact on
performance?
Ignite does not have a specific pattern for reusing an existing CacheStore
implementation; like everything else, you can see this in the code of
CacheJdbcPojoStore[1].

[1]:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStore.java

On Fri, Oct 21, 2016 at 11:57 AM, 胡永亮/Bob <hu...@neusoft.com> wrote:

> Hi everyone,
>
> In official document, there are some code about Partition-aware data
> loading.
>
> private void loadPartition(Connection conn, int part,
> IgniteBiInClosure<Long, Person> clo) {
>
> try (PreparedStatement st = conn.prepareStatement("select * from PERSONS 
> where partId=?")) {
>   st.setInt(1, part);
>
>   try (ResultSet rs = st.executeQuery()) {
> while (rs.next()) {
>   *Person person = new Person(rs.getLong(1), rs.getString(2), 
> rs.getString(3));*
>
>   clo.apply(person.getId(), person);
> }
>   }
> }
> catch (SQLException e) {
>   throw new CacheLoaderException("Failed to load values from cache 
> store.", e);
> }
>   }
>
> I have a question in real scenario in the previous bold code: My table 
> like Person has 100 columns, so I will list so many colmuns, it is not very 
> efficient.
>
> But, in the default implemention of cache.loadCache(), there are good 
> code for mapping the DB table to cache object.
>
> Can I reuse these code through some API?
>
> Thanks your reply.
>
>
> --
> Bob
>
> 
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> 
> ---
>



-- 
Vladislav Pyatkov


Re: Ignite metrics

2016-10-21 Thread Vladislav Pyatkov
Hi Anil,

The "Non heap" row does not contain information about caches. If you want
to see how much memory is used by a particular cache, you can switch on
cache statistics in the configuration:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="statisticsEnabled" value="true"/>
    ...
</bean>

and then track the cache metrics via the API:

ignite.cache("name").metrics().getOffHeapAllocatedSize()

or by JMX bean.

On Fri, Oct 21, 2016 at 10:38 AM, Anil <anilk...@gmail.com> wrote:

> HI,
>
> i have loaded around 20 M records into 4 node ignite cluster.
>
> Following is the ignite metrics logged in the log of one node.
>
>
> ^-- Node [id=c0e3dc45, name=my-grid, uptime=20:17:27:096]
> ^-- H/N/C [hosts=4, nodes=4, CPUs=32]
> ^-- CPU [cur=0.23%, avg=0.26%, GC=0%]
> ^-- Heap [used=999MB, free=71.96%, comm=1819MB]
> ^-- Non heap [used=101MB, free=-1%, comm=105MB]
> ^-- Public thread pool [active=0, idle=16, qSize=0]
> ^-- System thread pool [active=0, idle=16, qSize=0]
> ^-- Outbound messages queue [size=0]
>
>
> Each node is consuming around 20 gb RAM (can see it from htop command).
>
> Ignite Configuration :
>
>  
> 
> 
> 
> 
> 
>
> From the log, non heap used is 101 MB and heap used is 999MB. But actual
> RAM used by jar is 20 GB.
>
> Can you please clarify the numbers ?
>
> Thanks
>



-- 
Vladislav Pyatkov


Re: Some problems in test case which comparing sql query performance between Ignite and Oracle

2016-10-20 Thread Vladislav Pyatkov
Hi Bob,

One way to make SQL faster is adding indexes.
1) I do not think the estimate will improve a lot without an index,
because of the need to serialize, deserialize and move the data over the
network.

2) Ignite does not create an index on existing data, but you can always
copy the data to another cache (with an index) and drop the old one. The
community is going to implement adding indexes to a live cache, but for
now it is not possible.

On Thu, Oct 20, 2016 at 12:10 PM, 胡永亮/Bob <hu...@neusoft.com> wrote:

> Hi, everyone
>
> My test environment: Ignite cluster has 8 nodes, every node has 8
> cores CPU and 30G memory. Their network has 1000M speed.
> Oracle is deployed in the machine which has 32G memory and  8 cores
> CPU.
>
> My db table has 47535542 rows with 99 columns.
>
> When no index, the cost time of sql: select * from Kc21 where akc273='
> 王妍'
> Oracle: 152s
>  Ignite:   61s
>
> After creating index in the field akc273:
> Oracle: 3s
>
> Problem 1:I think 61s is too long for this sql in Ignite, how can I
> increase the performance?
> Problem 2 :  How to create index in exsiting cache? Now I only find
> some annotations and configuration to create index before loading data.
>
> Thanks.
>
> --
> Bob
>
> 
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> 
> ---
>



-- 
Vladislav Pyatkov


Re: JVM Crash - SIGSEGV on GridUnsafe.copyMemory

2016-10-19 Thread Vladislav Pyatkov
(JavaThread*, Thread*)+0xa0
> V  [libjvm.so+0xa68f3f]  JavaThread::thread_main_inner()+0xdf
> V  [libjvm.so+0xa6906c]  JavaThread::run()+0x11c
> V  [libjvm.so+0x91cb88]  java_start(Thread*)+0x108
> C  [libpthread.so.0+0x7aa1]
> hs_err_pid3410800.log
> <http://apache-ignite-users.70518.x6.nabble.com/file/
> n8356/hs_err_pid3410800.log>
> hs_err_pid3410096.log
> <http://apache-ignite-users.70518.x6.nabble.com/file/
> n8356/hs_err_pid3410096.log>
> hs_err_pid3408462.log
> <http://apache-ignite-users.70518.x6.nabble.com/file/
> n8356/hs_err_pid3408462.log>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/JVM-Crash-SIGSEGV-on-GridUnsafe-copyMemory-tp8356.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-18 Thread Vladislav Pyatkov
Hi,

I have just saw:

  at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1214)
  at
com.tudor.datagridI.server.writebehind.BasePersistentService.replaceDigest(BasePersistentService.java:302)
  at
com.tudor.datagridI.server.writebehind.JdbcTradeOrderPersistentService.replaceDigest(JdbcTradeOrderPersistentService.java:291)
  at
com.tudor.datagridI.server.writebehind.JdbcTradeOrderPersistentService.writeTradeOrders(JdbcTradeOrderPersistentService.java:74)
  at
com.tudor.datagridI.server.cachestore.springjdbc.TradeOrderCacheStore.writeAll(TradeOrderCacheStore.java:238)
  at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:685

Why are you updating cache entries inside the CacheStore?

On Tue, Oct 18, 2016 at 4:41 AM, bintisepaha <binti.sep...@tudor.com> wrote:

> This is a sample cache config. We have the same issue with on heap settings
> too.
> Do you need something else?
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
>
>
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
>  value="FULL_SYNC" />
> 
> 
> 
>  class="org.apache.ignite.cache.QueryEntity">
>  
> value="com.tudor.datagridI.client.data.trading.OrderKey"
> />
>  value="com.tudor.datagridI.
> client.data.trading.TradeOrder" />
>
> 
> 
>  key="traderId" value="java.lang.Integer" />
>  key="orderId" value="java.lang.Integer" />
>  key="insIid" value="java.lang.Integer" />
>  key="settlement" value="java.util.Date" />
>  key="clearAgent" value="java.lang.String" />
>  key="strategy" value="java.lang.String" />
>  value="java.lang.Integer" />
>  key="pvDate" value="java.util.Date" />
>  key="linkId" value="java.lang.Integer" />
> 
> 
> 
> 
>  class="org.apache.ignite.cache.QueryIndex">
>
> 
>
> 
>
>   traderId
>
>   orderId
>
> 
>
> 
>
> 
>
> SORTED
>
> 
>  name="name" value="tradeOrder_key_index" />
> 
> 
> 
> 
>     
> 
> 
> 
> class="org.apache.ignite.cache.affinity.rendezvous.
> RendezvousAffinityFunction">
>  value="true" />
> 
> 
> 
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8334.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Loading Hbase data into Ignite

2016-10-18 Thread Vladislav Pyatkov
Hi Anil,

The implementation of IgniteCallable looks very doubtful.
When you invoke "ignite.compute().call(calls)", every IgniteCallable is
serialized and
sent to a particular node for execution.

I doubt that the QueryPlan is serialized correctly.

You need to implement the Callable as simply as possible (additionally, you
can try implementing Externalizable) and create the connection inside the
IgniteCallable directly.
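A stdlib-only sketch of why a closure that captures a heavyweight object fails on send: the QueryPlan class here is a hypothetical stand-in, and plain java.io serialization stands in for Ignite's actual marshalling, but the failure mode is the same.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.Callable;

public class SerializableCallableDemo {
    // Hypothetical stand-in for a heavyweight, non-serializable object
    // (e.g. a query plan or an open connection).
    static class QueryPlan { }

    // Captures the plan as a field: it cannot be serialized, so the task
    // would fail when sent to a remote node.
    static class BadTask implements Callable<String>, Serializable {
        QueryPlan plan = new QueryPlan();
        public String call() { return "ok"; }
    }

    // Keeps no heavy state; builds resources inside call(), i.e. on the
    // executing node.
    static class GoodTask implements Callable<String>, Serializable {
        public String call() {
            QueryPlan plan = new QueryPlan(); // created on the target node
            return plan.getClass().getSimpleName();
        }
    }

    // True if the object survives a round through ObjectOutputStream.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("BadTask serializes:  " + serializes(new BadTask()));
        System.out.println("GoodTask serializes: " + serializes(new GoodTask()));
    }
}
```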

On Tue, Oct 18, 2016 at 7:34 AM, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> I have attached the sample program. please take a look and let me know if
> you have any questions.
>
> after spending some time, i noticed that the exception is happening only
> when processing of number of parallel callable's with broadcast.
>
> Thanks,
> Anil
>
> On 15 October 2016 at 04:33, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Hi Anil,
>>
>> Yes, the exception doesn't tell much. It would be great if you provide a
>> test that reproduces the issue.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Loading-Hbase-data-into-Ignite-tp8209p8308.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


-- 
Vladislav Pyatkov


Re: sample code for customised Partition logic

2016-10-17 Thread Vladislav Pyatkov
Hi,

It looks like a correct implementation.
What makes you conclude that something is wrong?

On Mon, Oct 17, 2016 at 6:29 AM, minisoft_rm <minisoft...@hotmail.com>
wrote:

> Hi Val,
> the reason is that "assignPartitions()" usually returns the same
> "List<List<ClusterNode>>" even when I start two Ignite nodes.
>
> last Friday, I refactor it as below:[
>
> @Override
> public List<List<ClusterNode>> assignPartitions(final
> AffinityFunctionContext affCtx)
> {
> final List<ClusterNode> nodes = affCtx.
> currentTopologySnapshot();
> int tmpi = 0;
> final List<List<ClusterNode>> assignments = new
> ArrayList<>(2);
> for (int i = 0; i < 2; i++)
> {
> final List<ClusterNode> partAssignment = new
> ArrayList<>();
>
> partAssignment.add(nodes.get(tmpi));
>
> if (tmpi < nodes.size() - 1)
> {
> tmpi++;
> }
>
> assignments.add(partAssignment);
>
> }
> return assignments;
> }
> ]
>
>  and it return result like:
> list[0] -> node1;
> list[1] -> node2.
>
> it is what I want.
>
> I don't think the original "assignPartitions()" has bug. (there are
> lots
> of people use it without error, right?)
> but why "assignPartitions()" returns to me the same node# List ?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/sample-code-for-customised-Partition-
> logic-tp8270p8317.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-14 Thread Vladislav Pyatkov
Hi Binti,

I cannot reproduce this issue.
Could you please provide the cache configuration?

On Thu, Oct 13, 2016 at 6:48 PM, vdpyatkov <vldpyat...@gmail.com> wrote:

> Hi Binti,
>
> Hi,
> This is look like a lock GridCacheWriteBehindStore and
> GridCachePartitionExchangeManager.
>
> Could you give work an example of this?
> If not I try to reproduce it tomorrow
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8273.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Network Segmentation configuarion

2016-10-13 Thread Vladislav Pyatkov
Hi Yitzhak,

You can configure a specific timeout in the discovery SPI, or increase the
common timeout via IgniteConfiguration#setFailureDetectionTimeout.
But a long timeout means the grid stalls for the duration of that timeout
before the failure is detected.

If you want to handle the segmentation event:

ignite.events().localListen(new IgnitePredicate<Event>() {
    @Override public boolean apply(Event event) {
        System.out.println("Execute custom logic...");

        return true;
    }
}, EventType.EVT_NODE_SEGMENTED);


But what will you do there?
You cannot wait for anything at that point, because the cluster has already
segmented the node (its data will be rebalanced onto the other nodes).

On Thu, Oct 13, 2016 at 12:40 PM, Yitzhak Molko <yitzhak.mo...@symbolab.com>
wrote:

> While I didn't configure any network segmentation properties
> (SegmentationResolvers, SegmentationResolveAttempts,
> SegmentCheckFrequency etc.) node is been shutdown from time to time:
> WARN : [discovery.tcp.TcpDiscoverySpi] Date=2016/10/13/07/42/52/009|Node
> is out of topology (probably, due to short-time network problems).
> WARN : [managers.discovery.GridDiscoveryManager]
> Date=2016/10/13/07/42/52/009|Local node SEGMENTED: TcpDiscoveryNode
> [id=4b3349f5-fda0-4e9d-a528-c8b5f4401717, addrs=[0:0:0:0:0:0:0:1%1,
> 10.0.0.5, 127.0.0.1], sockAddrs=[/127.0.0.1:47500,
> /0:0:0:0:0:0:0:1%1:47500, /10.0.0.5:47500], discPort=47500, order=271,
> intOrder=145, lastExchangeTime=1476344572001, loc=true,
> ver=1.7.0#20160801-sha1:383273e3, isClient=false]
> WARN : [managers.discovery.GridDiscoveryManager]
> Date=2016/10/13/07/42/52/092|Stopping local node according to configured
> segmentation policy.
>
> I would like to understand what is default behavior since I didn't
> configure any SegmentationResolvers.
> I can probably set SegmentationPolicy to NOOP to avoid node shutdown, but
> I don't think it's a good idea that node will out of topology for a long
> time.
> Is possible to set time/wait longer until getting SEGMENTED event?
>
> We are using ignite 1.7.0 and running cluster with 20 nodes.
>
> Thank you,
> Yitzhak
> --
>
> Yitzhak Molko
>



-- 
Vladislav Pyatkov


Re: Near cache

2016-10-13 Thread Vladislav Pyatkov
Hi,

Please clarify: have you enabled the event (EVT_CACHE_OBJECT_REMOVED) in
the configuration?
I guess all nodes will get EVT_CACHE_OBJECT_REMOVED in that case.

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
        </list>
    </property>
    ...
</bean>

On Tue, Oct 11, 2016 at 8:56 PM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Thank you for your reply. Would like to add more details to 3rd point as
> you
> have not clearly understood it.
>
> Lets assume there are 4 nodes running, node A brings data to distributed
> cache, as concept of Near Cahce I will push data to distributed cache as
> well as Node A will have it on heap in Map implementation. Later each node
> uses data from distributed cache and each node will now bring that data to
> their local heap based map implementation.
> Now comes the case of cache invalidation -  one of the node initiate REMOVE
> call and this will remove local heap copy for this acting node and
> distributed cache. This invokes EVT_CACHE_OBJECT_REMOVED event. However
> this
> event will be generated only on one node have that data in its partition
> (this is what I have observed, remote event for owner node and local event
> for acting node). In that case owner node has the responsibility to
> communicate to all other node to invalidate their local map based copy.
> So I am combining EVENT and TOPIC to implement this.
>
> Is this right approach? or there is a better approach?
>
> Cache remove event is generated only for owner node (node holding data in
> its partition) and node who is initiating remove API. Is this correct or it
> suppose to generate event for all nodes? Conceptually both have their own
> meaning and use, so I think both are correct.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Near-cache-tp8192p8223.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to get the load status of the Ignite cluster

2016-10-13 Thread Vladislav Pyatkov
Hi,

You can look at the code[1] and see how all the metrics are computed.
That code uses the public API, but the information about thread pools in
particular is available through MBeans:

org.apache:clsLdr=764c12b6,group=Thread Pools,name=GridExecutionExecutor

[1]: https://github.com/apache/ignite/blob/master/modules/
core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1005
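The same MBean attribute can be read programmatically. A stdlib-only sketch follows, demonstrated against the JVM's built-in Threading MBean, since the exact Ignite MBean name (including the clsLdr part) varies per process; the "ActiveCount" attribute mentioned in the comment is an assumption about the pool MBean's interface.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxReadDemo {
    // Reads one attribute from an MBean by object name. For Ignite you
    // would substitute the thread-pool MBean name quoted above (e.g.
    // ...group=Thread Pools,name=GridExecutionExecutor) and an attribute
    // such as "ActiveCount".
    static Object readAttribute(String name, String attr) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getAttribute(new ObjectName(name), attr);
    }

    public static void main(String[] args) throws Exception {
        // Demonstrated against a standard JVM MBean that always exists.
        System.out.println("live threads: "
            + readAttribute("java.lang:type=Threading", "ThreadCount"));
    }
}
```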

On Thu, Oct 13, 2016 at 4:24 AM, ght230 <ght...@163.com> wrote:

> From the work log, I can see the Metrics for local node, such as
> "^-- Public thread pool [active=0, idle=512, qSize=0]"
>
> I want to know which API of metrics can I use to get the value of "active",
> "idle" and "qsize".
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-get-the-load-status-of-the-
> Ignite-cluster-tp8232p8259.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to get the load status of the Ignite cluster

2016-10-12 Thread Vladislav Pyatkov
Hi,

You can use node metrics to estimate the load on a machine:
o.a.i.cluster.ClusterNode#metrics
See the example[1] in the Ignite samples project, or the documentation[2].

Additionally,
you can try to limit the load by configuration tuning:
- Set the property
o.a.i.configuration.CacheConfiguration#setMaxConcurrentAsyncOperations
quite small (500 by default)
- Decrease the number of threads in the public thread pool[3]

[1]:
examples\src\main\java\org\apache\ignite\examples\computegrid\cluster\ClusterGroupExample.java
[2]: http://apacheignite.gridgain.org/docs/cluster#cluster-node-metrics
[3]:
http://apacheignite.gridgain.org/v1.7/docs/performance-tips#configure-thread-pools
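A Spring XML sketch combining both suggestions; the cache name and the concrete values are illustrative assumptions, not recommendations:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Decrease the public (compute) thread pool. -->
    <property name="publicThreadPoolSize" value="8"/>

    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="myCache"/>
            <!-- Back-pressure for async cache operations (default 500). -->
            <property name="maxConcurrentAsyncOperations" value="100"/>
        </bean>
    </property>
</bean>
```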

On Wed, Oct 12, 2016 at 2:57 PM, ght230 <ght...@163.com> wrote:

> I am running Ignite under below enviroment:
> Message Queue--->Ignite Cache--->Continuous Queries(trigger IgniteCompute).
>
> However, in some cases, the speed of message received from MQ too fast,
> resulting in too much burden on the computing grid, it will affect the
> stability of the cluster.
>
> I want to limit the speed of message reception according to the load status
> of the compute grid.
> How can I get the load status of the Ignite cluster?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-get-the-load-status-of-the-
> Ignite-cluster-tp8232.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Evicted entry appears in Write-behind cache

2016-10-12 Thread Vladislav Pyatkov
Hi Pradeep,

Could you please provide cache configuration?

On Tue, Oct 11, 2016 at 6:57 PM, Denis Magda <dma...@gridgain.com> wrote:

> Looks like that my initial understanding was wrong. There is a related
> discussion
> http://apache-ignite-users.70518.x6.nabble.com/Cache-
> read-through-with-expiry-policy-td2521.html
>
> —
> Denis
>
> On Oct 11, 2016, at 8:55 AM, Pradeep Badiger <pradeepbadi...@fico.com>
> wrote:
>
> Hi Denis,
>
> I did the get() on the evicted entry from the cache, it still returned me
> the value without calling the load() on the store. As you said, the entry
> would be cached in the write behind store even for the evicted entry. Is
> that true?
>
> Thanks,
> Pradeep V.B.
> *From:* Denis Magda [mailto:dma...@gridgain.com <dma...@gridgain.com>]
> *Sent:* Monday, October 10, 2016 9:13 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Evicted entry appears in Write-behind cache
>
> Hi,
>
> How do you see that the evicted entries are still in the cache? If you
> check this by calling cache get like operations then entries can be loaded
> back from the write-behind store or from your underlying store.
>
> —
> Denis
>
>
> On Oct 8, 2016, at 1:00 PM, Pradeep Badiger <pradeepbadi...@fico.com>
> wrote:
>
> Hi,
>
> I am trying to evaluate Apache Ignite and trying to explore eviction
> policy and write behind features. I am seeing that whenever a cache is
> configured with eviction policy and write behind feature, the write behind
> cache always have all the changed entries including the ones that are
> evicted, before the write cache is flushed. But soon after it is flushed,
> the store loads again from DB. Is this the expected behavior? Is there a
> documentation on how the write behind cache works?
>
> Thanks,
> Pradeep V.B.
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>
>
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>
>
>


-- 
Vladislav Pyatkov


Re: Loading Hbase data into Ignite

2016-10-11 Thread Vladislav Pyatkov
Hi,

The easiest way to do this is using a DataStreamer[1] from all server
nodes, each loading a specific part of the data.
You can do it using Ignite compute[2] (matching by node id, for example, or
a node attribute, or anything else) with the part number as a parameter to
the SQL query against HBase.

[1]: http://apacheignite.gridgain.org/docs/data-streamers
[2]:
http://apacheignite.gridgain.org/docs/distributed-closures#broadcast-methods

On Tue, Oct 11, 2016 at 4:11 PM, Anil <anilk...@gmail.com> wrote:

> HI,
>
> we have around 18 M records in hbase which needs to be loaded into ignite
> cluster.
>
> i was looking at
>
> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>
> https://github.com/apache/ignite/tree/master/examples
>
> is there any approach where each ignite node loads the data of one hbase
> region ?
>
> Do you have any recommendations ?
>
> Thanks.
>



-- 
Vladislav Pyatkov


Re: Near cache

2016-10-11 Thread Vladislav Pyatkov
Hi,

I think you will see some delay when updating a local map entry, and, as a
general issue, you will generate garbage, which increases garbage collector
load and heap consumption.

The delay should not be long, but it depends on the communication layer.

I do not quite understand this point. Why can't you use one listener for
several event types[1]?

I think in your case you need to use a remote listener (which is invoked
not only on the local node). Use send rather than sendOrdered, because
sendOrdered adds extra performance overhead.

I cannot say anything about this. You need to test the implementation.

In general, you must not block inside listeners, because blocking will
badly affect other tasks in the same thread pool. You can also always tune
the thread count of a specific pool[2].

[1]: https://apacheignite.readme.io/docs/events
[2]:
https://apacheignite.readme.io/docs/performance-tips#tune-cache-data-rebalancing
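
The invalidation flow from the POC can be sketched with plain Java
collections, with no Ignite involved; a static listener list stands in for
the topic, and NearCacheSketch is a hypothetical name:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Simplified stand-in for the POC discussed above: each node keeps a local,
// non-serialized copy of hot entries and drops them when an invalidation
// message arrives. A real implementation would use Ignite events/messaging;
// here a plain listener list simulates the topic.
public class NearCacheSketch {
    private final Map<String, Object> local = new ConcurrentHashMap<>();
    private static final List<Consumer<String>> topic = new CopyOnWriteArrayList<>();

    public NearCacheSketch() { topic.add(this::invalidate); }

    public void put(String k, Object v) { local.put(k, v); }
    public Object get(String k) { return local.get(k); }
    private void invalidate(String k) { local.remove(k); }

    // Called by the node that removed the entry from the distributed cache.
    public static void publishRemove(String k) { topic.forEach(l -> l.accept(k)); }

    public static void main(String[] args) {
        NearCacheSketch n1 = new NearCacheSketch();
        NearCacheSketch n2 = new NearCacheSketch();
        n1.put("meta", "v1"); n2.put("meta", "v1");
        publishRemove("meta");              // one node invalidates
        System.out.println(n1.get("meta")); // null - local copy dropped
        System.out.println(n2.get("meta")); // null - local copy dropped
    }
}
```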

On Tue, Oct 11, 2016 at 2:26 AM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Hi,
>
> Because Ignite always serialize data, we are not able to take advantage of
> Near cache where cached object are heavy interms of
> serialization/de-serilalization and no requirement of query or eviction.
> We are taking about 5 min vs 2 hours difference, if we do not cache meta
> information of frequently accessed small number of objects.
>
> I am working on Java's standard map based implementation to simulate Near
> cache where local heap has non-serialized object backed by partitioned
> ignited distributed cache. Cache invalidation is a challenge where one node
> invalidates partitioned distributed cache and now we have to invalidate
> local copy of same cache form each each node.
>
> Need experts opinion on my POC -
>
> For a particular cache add EVT_CACHE_OBJECT_REMOVED cache event listener.
> Now remove cache event will be generated on local node and remote node who
> own that cache key.
> Remove event handler/actor publish a message to a topic.
> On receiving message on topic each node will remove object from local map
> based copy.
>
> Question -
> 1. Do you see any issue with this kind of implementation when cache
> invalidation will be very less, but read will be more frequent?
> 2. Approximately how much delay one should expect during event generation
> and publishing messages to all nodes?
> 3. I have observed remove Event is generated only on acting local node or
> owner node. So need to combine Event with Topic. Is there any other way
> this
> can be achieved?
> 4. In terms of performance should we use -
>  - Should we use local listener or remote listener? In POC I have added
> remote listener.
>  - Should we use sendOrderd or send?
> 4. Is there any specific sizing need to be done for REMOVE event and TOPIC?
> We have around 20 such cache and I am planning to have a single topic.
>
> Regarding topology - At minimum 2 nodes with 20 GB off-heap.
>
> Thanks,
> -Sam
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Near-cache-tp8192.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-10-11 Thread Vladislav Pyatkov
Hi,

Try doing it like this:

SqlFieldsQuery query = new
SqlFieldsQuery(sql).setArgs(inParameter.toArray(),
anotherINParameter.toArray());

The method works with more than one argument, since setArgs() takes
varargs.

On Tue, Oct 11, 2016 at 3:08 PM, Anil <anilk...@gmail.com> wrote:

> ignite supports multiple IN queries ?  i tried the following and no luck.
>
> String sql = "select p.id, p.name from Person p join table(name
> VARCHAR(15) = ?) i on p.name = i.name join table(id varchar(15) = ?) k on
> p.id = k.id";
>
> SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
> inParameter.toArray()}, new Object[] { anotherINParameter.toArray()});
>
> Thanks
>



-- 
Vladislav Pyatkov


Re: Swap space

2016-10-10 Thread Vladislav Pyatkov
Hi,

You can watch entries being moved to swap space using an Ignite event
listener[1] on the event EventType#EVT_SWAP_SPACE_DATA_EVICTED.

By default the swap space directory is "swapspace" inside the working
directory (if you configured IGNITE_HOME, it is placed in its "work"
sub-directory).

You can configure this like:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  ...
  <property name="swapSpaceSpi">
    <bean class="org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi">
      <property name="baseDirectory" value="/path/to/swap"/>
    </bean>
  </property>
  ...
</bean>

[1]: https://apacheignite.readme.io/docs/events

On Sat, Oct 8, 2016 at 8:47 AM, Anil <anilk...@gmail.com> wrote:

> Hi,
> I am little new to ignite and trying out few features.
> How to log swap space movement  ? what is the default location of swap on
> disk ?
>
> Thanks.
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-10-10 Thread Vladislav Pyatkov
Anil,

Ignite does not have its own DSL for SQL, but you can use any ANSI SQL
generator: Ignite is compatible with the ANSI SQL-99 standard.

On Mon, Oct 10, 2016 at 3:04 PM, Anil <anilk...@gmail.com> wrote:

> Thank you Vladislav. it worked. my bad.. i missed that.
>
> Was there any java Query DSL for ignite queries ? Thanks.
>
> On 10 October 2016 at 17:30, Vladislav Pyatkov <vpyat...@gridgain.com>
> wrote:
>
>> Hi,
>>
>> Try to do it like this
>>
>> SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
>> inParameter.toArray() });
>>
>> This will by perform, because the method (SqlFieldsQuery.setArgs()) using
>> varargs.
>>
>> On Mon, Oct 10, 2016 at 2:47 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> HI ,
>>>
>>> I am trying in query as given the below link and it is not working
>>>
>>> https://apacheignite.readme.io/docs/sql-queries#performance-
>>> and-usability-considerations
>>>
>>>
>>> sudo code :
>>>
>>> List inParameter = new ArrayList();
>>> inParameter.add("name0");
>>> inParameter.add("name1");
>>>
>>> String sql = "select p.id, p.name from Person p join table(id
>>> VARCHAR(15) = ?) i on p.name = i.id";
>>>
>>> SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(in
>>> Parameter.toArray());
>>>
>>> Could you please point me the issue in the usage ?
>>>
>>> Thanks.
>>>
>>>
>>
>


-- 
Vladislav Pyatkov


Re: IN Query

2016-10-10 Thread Vladislav Pyatkov
Hi,

Try to do it like this

SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
inParameter.toArray() });

This works because the method (SqlFieldsQuery.setArgs()) takes varargs, so
the extra array wrapping keeps your collection as a single argument.
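
The underlying behavior is plain Java varargs and can be reproduced
without Ignite; countArgs below is just a stand-in for setArgs():

```java
// Demonstrates why the extra Object[] wrapping matters for varargs methods
// such as SqlFieldsQuery.setArgs(). countArgs is a hypothetical stand-in.
public class VarargsDemo {
    static int countArgs(Object... args) { return args.length; }

    public static void main(String[] args) {
        Object[] inParams = { "name0", "name1" };
        // Passing the array directly spreads it into two separate arguments:
        System.out.println(countArgs(inParams));                  // 2
        // Wrapping keeps the whole array as one argument (what an IN query needs):
        System.out.println(countArgs(new Object[] { inParams })); // 1
    }
}
```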

On Mon, Oct 10, 2016 at 2:47 PM, Anil  wrote:

> HI ,
>
> I am trying in query as given the below link and it is not working
>
> https://apacheignite.readme.io/docs/sql-queries#performance-and-usability-
> considerations
>
>
> sudo code :
>
> List inParameter = new ArrayList();
> inParameter.add("name0");
> inParameter.add("name1");
>
> String sql = "select p.id, p.name from Person p join table(id VARCHAR(15)
> = ?) i on p.name = i.id";
>
> SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(
> inParameter.toArray());
>
> Could you please point me the issue in the usage ?
>
> Thanks.
>
>


Re: Forced Write behind on demand

2016-10-10 Thread Vladislav Pyatkov
Hi,

You cannot force a flush of entries to the store (even with "write behind"
enabled), but you can configure the flush frequency or batch size[1].

[1]:
http://apacheignite.gridgain.org/v1.1/docs/persistent-store#configuration
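
For reference, a sketch of the relevant CacheConfiguration properties from
[1]; all values below are illustrative, not recommendations:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="writeThrough" value="true"/>
    <property name="writeBehindEnabled" value="true"/>
    <!-- Flush at least every 5 seconds... -->
    <property name="writeBehindFlushFrequency" value="5000"/>
    <!-- ...or as soon as this many dirty entries have accumulated. -->
    <property name="writeBehindFlushSize" value="10240"/>
</bean>
```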

On Sat, Oct 8, 2016 at 10:26 PM, Pradeep Badiger <pradeepbadi...@fico.com>
wrote:

> Hi,
>
>
>
> I am trying to validate ignite for one use case where I want to do the
> write behind on demand. Is there a way I can force flush with the
> write-behind feature?
>
>
>
> Thanks,
>
> Pradeep V.B.
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>



-- 
Vladislav Pyatkov


Re: Evicted entry appears in Write-behind cache

2016-10-10 Thread Vladislav Pyatkov
Hi,

"Write behind" is a feature of "write through", but when the feature was
used the entries will written asynchronously.
Hence If all entries hit into storage it is correct behavior.

Please provide reproduced example, If are you means another?

On Sat, Oct 8, 2016 at 11:00 PM, Pradeep Badiger <pradeepbadi...@fico.com>
wrote:

> Hi,
>
>
>
> I am trying to evaluate Apache Ignite and trying to explore eviction
> policy and write behind features. I am seeing that whenever a cache is
> configured with eviction policy and write behind feature, the write behind
> cache always have all the changed entries including the ones that are
> evicted, before the write cache is flushed. But soon after it is flushed,
> the store loads again from DB. Is this the expected behavior? Is there a
> documentation on how the write behind cache works?
>
>
>
> Thanks,
>
> Pradeep V.B.
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>



-- 
Vladislav Pyatkov


Re: Certificates for Encryption

2016-10-06 Thread Vladislav Pyatkov
Hi,

Ignite requires a JKS trust store for storing trusted certificates[1].
If you have a certificate file, you can import it into a store like this:

*keytool -importcert -file certificate.cer -keystore keystore.jks -alias
"Alias"*

[1]: https://apacheignite.readme.io/docs/ssltls

On Thu, Oct 6, 2016 at 5:38 PM, styriver  wrote:

> Hello Our Unix team is asking if Ignite requires a keystore or can we just
> pass the location path to a certificate without having to import into a
> java
> keystore.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Certificates-for-Encryption-tp8115.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Time break up of SQL query execution in Apache Ignite

2016-10-05 Thread Vladislav Pyatkov
Hi,

Ignite does not provide metrics beyond explain[1] and the local H2 console.
You can estimate the network component by executing the query locally,
either through the H2 console or with the setLocal property.

[1]: https://apacheignite.readme.io/docs/sql-queries#using-explain

On Wed, Oct 5, 2016 at 5:33 PM, rishi007bansod <rishi007ban...@gmail.com>
wrote:

> Can I get time break up of SQL query in Apache Ignite in terms of how much
> time is spent by query in processing(computation part), networking, memory
> read and write?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Time-break-up-of-SQL-query-execution-in-Apache-Ignite-
> tp8101.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Unexpected flag value

2016-10-03 Thread Vladislav Pyatkov
Hi Dmitry,

Could you please check what happens if you implement Externalizable
explicitly?

public class TaskOutput implements Externalizable {...}

On Mon, Oct 3, 2016 at 4:45 PM, dmitry.parkhonin 
wrote:

> It is not a response, it is a question.
>
> In addition to my original question:
> Just before the error there are the following lines in the log:
>
> 2016-10-03 13:28:08,788 DEBUG - Received peer class/resource loading
> request
> [node=7c7ae245-d3f2-40a5-a5fb-47fb18f97501, req=GridDeploymentRequest
> [rsrcName=ru/depsy/TaskOutput.class,
> ldrId=6d5e38a8751-c03371e4-49be-4b34-947d-20022b45dee7, isUndeploy=false,
> nodeIds=null]]
> [org.apache.ignite.internal.managers.deployment.
> GridDeploymentCommunication]
> [p2p-#147%pricingGridServer%] {}
> 2016-10-03 13:28:08,788 DEBUG - Sent peer class loading response
> [node=7c7ae245-d3f2-40a5-a5fb-47fb18f97501, res=GridDeploymentResponse
> [success=true, errMsg=null, byteSrc=GridByteArrayList [size=1198]]]
> [org.apache.ignite.internal.managers.deployment.
> GridDeploymentCommunication]
> [p2p-#147%pricingGridServer%] {}
> 2016-10-03 13:28:08,819 DEBUG - Send recovery acknowledgement
> [rmtNode=7c7ae245-d3f2-40a5-a5fb-47fb18f97501, rcvCnt=80]
> [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi]
> [grid-nio-worker-0-#105%pricingGridServer%] {}
> 2016-10-03 13:28:08,819 DEBUG - Received grid job response message
> [msg=GridJobExecuteResponse [nodeId=7c7ae245-d3f2-40a5-a5fb-47fb18f97501,
> sesId=0e5e38a8751-c03371e4-49be-4b34-947d-20022b45dee7,
> jobId=1e5e38a8751-c03371e4-49be-4b34-947d-20022b45dee7, gridEx=null,
> isCancelled=false], nodeId=7c7ae245-d3f2-40a5-a5fb-47fb18f97501]
> [org.apache.ignite.internal.processors.task.GridTaskProcessor]
> [sys-#40%pricingGridServer%] {}
> 2016-10-03 13:28:08,835 ERROR - Failed to obtain remote job result policy
> for result from ComputeTask.result(..) method ...
>
> It seems to me that the error appears just after the TaskOuput class is
> loaded by remote classloader.
>
> The TaskOutput class:
>
> public class TaskOutput {
>
>   private final String taskId;
>   private final Throwable exception;
>   private final T output;
>
>   public TaskOutput(String taskId, T output, Throwable exception) {
> this.taskId = taskId;
> this.output = output;
> this.exception = exception;
>   }
>
>   public String getTaskId() {
> return taskId;
>   }
>
>   public T getOutput() {
> return output;
>   }
>
>   public Throwable getException() {
> return exception;
>   }
> }
>
> May the output field be the reason of the exception?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Unexpected-flag-value-tp8050p8052.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Which ports does ignite cluster need to run normally?

2016-09-30 Thread Vladislav Pyatkov
Hi,

For nodes joining the cluster, the Communication and Discovery SPI ports
are enough:

TcpDiscoverySpi: 47500~47600
TcpCommunicationSpi: 47100~47200

by default.

The others are optional:

time server port: 31100~31200
if you use time synchronization between nodes.

TCP server port: 11211
for connections over the internal HTTP/REST protocol.

Remote Management {com.sun.management.jmxremote.port}: 49128
for JMX connections.

sharedMemoryPort: 48100~48200
if using shared memory.
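
If the firewall only whitelists specific ranges, the discovery and
communication ports can also be pinned explicitly in the configuration (a
sketch; the values shown are the defaults):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="localPort" value="47500"/>
            <property name="localPortRange" value="100"/>
        </bean>
    </property>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="localPort" value="47100"/>
            <property name="localPortRange" value="100"/>
        </bean>
    </property>
</bean>
```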

On Fri, Sep 30, 2016 at 10:00 AM, Level D <724172...@qq.com> wrote:

> Hi all,
>
> There is an active firewall in my system, and these following ports will
> be added to exceptions list.
>
> time server port:31100~31200
> TCP server port:11211
> Remote Management {com.sun.management.jmxremote.port}:49128
> TcpDiscoverySpi:47500~47600
> TcpCommunicationSpi:47100~47200
> sharedMemoryPort:48100~48200
>
> Is it enough?
>
> Regards,
>
> Zhou.
>
>>
>


-- 
Vladislav Pyatkov


Re: How to avoid the event lost in the continuous query

2016-09-27 Thread Vladislav Pyatkov
Hi,

The local listener needs to be present on the side where the continuous
query is started, and the remote filter needs to be on all server nodes.

On Tue, Sep 27, 2016 at 9:59 AM, ght230 <ght...@163.com> wrote:

> How to deployment ContinuousQuery?
>
> Does remoteFilter deploy on Server side and localListener deploy on Client
> side?
> Or both remoteFilter and localListener deploy on Client side?
>
> Can you show me a detailed example?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-the-event-lost-in-the-
> continuous-query-tp7904p7961.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: AtomicSequence not working when shutting down one server node from a cluster

2016-09-26 Thread Vladislav Pyatkov
Hi,

Your example requires an external connection to run and does not have the
atomicConfiguration that was recommended.
Hence I created my own sample; please look at the attachment.
The example does not fail as long as at least one server stays online.

PS
You need to start ignite.sh with the same configuration, default-config.xml.

On Fri, Sep 23, 2016 at 1:46 PM, hitansu <hitansu...@gmail.com> wrote:

> class org.apache.ignite.IgniteException: null
> at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.
> java:908)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheAtomicSequenceImpl.incrementAndGet(GridCacheAtomicSequenceImpl.
> java:178)
> at
> ignite.IdGenerationServiceCacheLayer.getId(IdGenerationServiceCacheLayer.
> java:61)
> at
> idgen_service.IdGenerationServiceDataLayer.generateId(
> IdGenerationServiceDataLayer.java:56)
> at client.IdGenTask.generateSequnceId(IdGenTask.java:43)
> at client.IdGenTask.run(IdGenTask.java:33)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteCheckedException: null
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.outTx(
> GridCacheUtils.java:921)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheAtomicSequenceImpl.internalUpdate(GridCacheAtomicSequenceImpl.
> java:255)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheAtomicSequenceImpl.incrementAndGet(GridCacheAtomicSequenceImpl.
> java:175)
> ... 5 more
> Caused by: java.lang.NullPointerException
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheAtomicSequenceImpl$2.call(GridCacheAtomicSequenceImpl.java:504)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheAtomicSequenceImpl$2.call(GridCacheAtomicSequenceImpl.java:477)
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils$23.call(
> GridCacheUtils.java:1672)
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.outTx(
> GridCacheUtils.java:915)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/AtomicSequence-not-working-
> when-shutting-down-one-server-node-from-a-cluster-tp7770p7905.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="
   http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:47543..47550</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>

        <property name="atomicConfiguration">
            <bean class="org.apache.ignite.configuration.AtomicConfiguration">
                <property name="backups" value="1"/>
            </bean>
        </property>
    </bean>
</beans>

Main.java
Description: Binary data


Re: Apache Ignite cluster in AWS using IP without Multicast

2016-09-21 Thread Vladislav Pyatkov
Hi,

Ignite has a special IP finder for AWS. Look at the article[1].

[1]: https://apacheignite.readme.io/docs/aws-config
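
For example, the S3-based IP finder from [1] is configured roughly like
this (the bucket name and the credentials bean reference are placeholders):

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                <!-- Placeholders: supply your own credentials bean and bucket. -->
                <property name="awsCredentials" ref="aws.creds"/>
                <property name="bucketName" value="your-bucket-name"/>
            </bean>
        </property>
    </bean>
</property>
```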

On Tue, Sep 20, 2016 at 10:39 PM, Mohammad Shariq <mohdsha...@gmail.com>
wrote:

>
> Hi,
>>
>> I am trying Ignite for serving caching needs.
>>
>>
>> I want to have a cluster of 2 instances in AWS, and want to use static IP
>> Finder. But ignite is not able to find the nodes in cluster and hanged on
>> the message below.
>>
>> *[18:49:54] Security status [authentication=off, tls/ssl=off]*
>>
>> My example ignite config for static ip finder is mentioned below. Here I
>> have tried with public Ip address of my AWS instance as well as private IP
>> address of AWS instance, but it didnt work and couldn't find the cluster
>> nodes.
>>
>> *My Ignite Config for Static ip finder is below:*
>>
>> > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>>  
>> 
>>         
>> 11.33.50.14:47500..47509
>> 11.33.49.180:47500..47509
>> 
>>
>>
>>
>>


-- 
Vladislav Pyatkov


Re: [EXTERNAL] Re: Failed to write class name to file: java.io.FileNotFoundException

2016-09-21 Thread Vladislav Pyatkov
Ensure that you have read and execute permissions on all child directories
as well.

*chmod o+rx /opt/ignite/apache-ignite-fabric-1.6.0-bin/work/marshaller/*
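
A quick sketch for checking that the effective user can actually create
files in the marshaller directory (the path here is a relative example,
not the real install path):

```shell
# Probe whether the current user can create files where Ignite writes its
# marshaller mappings; a failure here reproduces the "Permission denied" error.
dir=./work/marshaller
mkdir -p "$dir"
chmod u+rwx "$dir"
if touch "$dir/probe.classname" 2>/dev/null; then
    echo writable
else
    echo "not writable"
fi
```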

On Wed, Sep 21, 2016 at 4:58 PM, chevy <chetan.v.ya...@target.com> wrote:

> Yes, I have given rwxrwxrwx to ‘marshaller’ folder and rwxr-xr-x
> permissions to files inside it. Please suggest if I need to change anything
> here.
>
>
>
> --
>
> Regards,
>
> Chetan.
>
>
>
> *From: *"vdpyatkov [via Apache Ignite Users]" <ml-node+[hidden email]
> <http:///user/SendEmail.jtp?type=node=7858=0>>
> *Date: *Wednesday, September 21, 2016 at 7:22 PM
> *To: *"Chetan.V.Yadav" <[hidden email]
> <http:///user/SendEmail.jtp?type=node=7858=1>>
> *Subject: *[EXTERNAL] Re: Failed to write class name to file:
> java.io.FileNotFoundException
>
>
>
> Hi,
>
> Are you sure, your application have enough permission on write to the
> directory (/opt/ignite/apache-ignite-fabric-1.6.0-bin/work/marshaller/)?
>
>
>
> On Wed, Sep 21, 2016 at 12:43 PM, chevy <[hidden email]> wrote:
>
> Hi,
>
> I am getting below error when I try to add data to cache. It used to work
> earlier with no issues. I am using Ignite version 1.6.
>
> [ERROR][main][MarshallerContextImpl] Failed to write class name to file
> [id=-1398818952, clsName=com.target.ignite.model.sales.SalesModel,
> file=/opt/ignite/apache-ignite-fabric-1.6.0-bin/work/
> marshaller/-1398818952.classname]
> java.io.FileNotFoundException:
> /opt/ignite/apache-ignite-fabric-1.6.0-bin/work/marshaller/-1398818952.
> classname
> (Permission denied)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Failed-to-write-class-name-to-file-java-
> io-FileNotFoundException-tp7855.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
>
> --
>
> Vladislav Pyatkov
>
>
> --
>
> *If you reply to this email, your message will be added to the discussion
> below:*
>
> http://apache-ignite-users.70518.x6.nabble.com/Failed-to-
> write-class-name-to-file-java-io-FileNotFoundException-tp7855p7857.html
>
>
> --
> View this message in context: Re: [EXTERNAL] Re: Failed to write class
> name to file: java.io.FileNotFoundException
> <http://apache-ignite-users.70518.x6.nabble.com/Failed-to-write-class-name-to-file-java-io-FileNotFoundException-tp7855p7858.html>
>
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Failed to write class name to file: java.io.FileNotFoundException

2016-09-21 Thread Vladislav Pyatkov
Hi,
Are you sure your application has permission to write to the directory
(/opt/ignite/apache-ignite-fabric-1.6.0-bin/work/marshaller/)?

On Wed, Sep 21, 2016 at 12:43 PM, chevy <chetan.v.ya...@target.com> wrote:

> Hi,
>
> I am getting below error when I try to add data to cache. It used to work
> earlier with no issues. I am using Ignite version 1.6.
>
> [ERROR][main][MarshallerContextImpl] Failed to write class name to file
> [id=-1398818952, clsName=com.target.ignite.model.sales.SalesModel,
> file=/opt/ignite/apache-ignite-fabric-1.6.0-bin/work/
> marshaller/-1398818952.classname]
> java.io.FileNotFoundException:
> /opt/ignite/apache-ignite-fabric-1.6.0-bin/work/marshaller/-1398818952.
> classname
> (Permission denied)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Failed-to-write-class-name-to-file-java-
> io-FileNotFoundException-tp7855.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: AtomicSequence not working when shutting down one server node from a cluster

2016-09-20 Thread Vladislav Pyatkov
Hi,

For a REPLICATED cache the backups property does not matter.

Could you please provide a code sample where the issue is reproduced?

On Tue, Sep 20, 2016 at 10:33 AM, hitansu  wrote:

> First of all why it is still showing this post is not accepted ?
> I tried with the backup option & also tried with the REPLICATED mode cache
> setting.Still it gives null poiter
> when I stop one of the server(basically the first node).This is my cache
> setting.
>
> 
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/AtomicSequence-not-working-
> when-shutting-down-one-server-node-from-a-cluster-tp7770p7839.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Services not injected into CacheStore if deployed using Spring configuration

2016-09-15 Thread Vladislav Pyatkov
Hello,

I am starting to think differently... the life cycle of a CacheStore and
the life cycle of a Service may be completely independent.

A service can be undeployed (for example via
ignite.services().cancel(CacheStoreBackend.SERVICE_NAME)) while the cache
(with its store) continues to exist.

In that case, I think getting the service just in time, when it is needed,
is the best way.
Please suggest this on the dev list (d...@ignite.apache.org) first.
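
The "get it just in time" workaround boils down to a lazy, memoized
lookup; here is a plain-Java sketch (LazyLookup is a hypothetical helper,
and the supplier stands in for ignite.services().service(...)):

```java
import java.util.function.Supplier;

// Defers the service lookup until first use instead of injecting it at
// construction time, so the CacheStore no longer depends on deployment order.
public class LazyLookup<T> implements Supplier<T> {
    private final Supplier<T> lookup;
    private volatile T cached;

    public LazyLookup(Supplier<T> lookup) { this.lookup = lookup; }

    @Override public T get() {
        T s = cached;
        if (s == null) {
            synchronized (this) {
                if (cached == null) cached = lookup.get(); // first-use lookup
                s = cached;
            }
        }
        return s;
    }

    public static void main(String[] args) {
        // In a real store the supplier would call ignite.services().service(...).
        LazyLookup<String> backend = new LazyLookup<>(() -> "backend-service");
        System.out.println(backend.get()); // looked up on first use
    }
}
```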

On Thu, Sep 15, 2016 at 3:23 PM, Evgeniy Ignatiev <
yevgeniy.ignat...@gmail.com> wrote:

> I am going to file issue, also there is one point I want to clarify before
> doing this: having already configured and run cluster, upon deployments to
> a newly joined node, services and caches are started independently, and
> creation of a cache store for a cache may occur before all of the services
> intended to be injected are deployed, this will result in the same behavior
> as in the example I provided. Should I include this case into the issue?
>
>
> On 9/15/2016 2:52 PM, vdpyatkov wrote:
>
>> I think, you are right.
>> Could you plese create issue for Ignite in Apache Jira[1]?
>>
>> Until you can to use something like this:
>>
>>  Ignite ig = Ignition.ignite();
>>  backend = ig.services().service(CacheStoreBackend.SERVICE_NAME);
>>
>> as workaround.
>>
>> [1]: https://issues.apache.org/jira/secure/Dashboard.jspa
>>
>>
>> YevIgn wrote
>>
>>> Hello, everyone.
>>>
>>> Recently, while using service injection in custom CacheStore
>>> implementation, we faced the problem that with startup deployment of
>>> services and caches through Spring, the services are not injected into
>>> CacheStore. It doesn't happen when we deploy our services and caches
>>> manually, so we assume the behavior is not intended.
>>>
>>> Here is the small demo proejct:
>>> https://github.com/YevIgn/ignite-cache-store-service-injection - when
>>> running example.Main I get NPE on cache.put(..)
>>>
>>> Could you, please, look into it and advise on possible resolution of
>>> this issue?
>>>
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Services-not-injected-into-CacheStore-if-
>> deployed-using-Spring-configuration-tp7708p7761.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


-- 
Vladislav Pyatkov

