Re: .Net ICacheEntryFilter

2018-03-21 Thread Alexey Popov
Hi,

Can you share a simple reproducer project for [InstanceResource] within
ICacheEntryFilter?

I can't find unit tests for that case in the Apache Ignite source code.

> but the field _ignite is null and we have no access to cache to get it. 
That sounds like a bug that should be fixed.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Logging using Log4Net

2018-02-13 Thread Alexey Popov
Igg.zip   



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Logging using Log4Net

2018-02-13 Thread Alexey Popov
Hi,

Hm, actually, you've got several log4net misconfigurations in your solution.
Everything was fine with the Ignite config itself ).
Please see the updated minified solution attached. Just build the solution and
it will restore all required packages.

The following log4net misconfigurations were fixed:

1. Your app.config <log4net> section was ignored; an explicit
XmlConfigurator.Configure() call is required for log4net.

2. You should always have a <root> logger in the <log4net> section.

3. The <logger> name attribute cannot be abstract. It should contain an actual
package name.

4. %property{LogName} from the XML should be initialized in the global context
before usage. Please see the call:
log4net.GlobalContext.Properties["LogName"] =
System.Reflection.Assembly.GetExecutingAssembly().GetName().Name;

Please find more info about log4net at [1]

Thank you,
Alexey

[1] https://logging.apache.org/log4net/release/manual/configuration.html




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Logging using Log4Net

2018-02-12 Thread Alexey Popov
Hi,

There could be several issues. Unfortunately, you just provided some config
snippets.

First of all, please add  to your appender
RollingLogFileAppender config.

Then, please ensure that your <log4net> configuration section is
actually used.
It is better to have a separate log4net.config file.

Please share a simple reproducer project if you still face any issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Remote client

2018-01-18 Thread Alexey Popov
Hi,

Does your remote client have a public IP?

Please note that the client node should be accessible for incoming TCP
connections from a server node, i.e. the server node should be able to
connect to the client IP as well.

Your client IP is 192.168.8.132, and your server node is not able to connect
to it.

That is the real root cause of your issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-17 Thread Alexey Popov
Hi Larry,

I checked the code.
The issue is specific to your test data.
You have relatively large initial entries (4.7 Kbytes) with the same index
length (the index is just a String). Please note that such an index can't fit
into a single page (4K).

The rest of the entries during .get() (from the Store) are relatively short
(just the "foo" word).
It seems that Ignite can't make a correct eviction threshold estimation
(including the index) in your case.

If you change .setEvictionThreshold(.9) to .setEvictionThreshold(.8) with
the same test data then everything works as expected.
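
If needed, an explicit data region tuned that way could look like this (a
sketch; the region name and max size are illustrative):

DataRegionConfiguration regionCfg = new DataRegionConfiguration()
    .setName("myRegion")                                    // illustrative
    .setMaxSize(256L * 1024 * 1024)                         // 256 MB, illustrative
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU) // page eviction must be on
    .setEvictionThreshold(0.8);                             // instead of the 0.9 default

DataStorageConfiguration storageCfg = new DataStorageConfiguration()
    .setDataRegionConfigurations(regionCfg);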

Anyway, I will open a ticket for your reproducer.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-15 Thread Alexey Popov
Hi Larry,

I am without my PC for a while. I will check the file you attached later
this week.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite best practice

2018-01-12 Thread Alexey Popov
Hi Sergey,

There is no exact answer to your question; it depends mostly on
your use case.

1. First of all, you should look at CPU/memory/network usage.
2. Then you should check the SQL debugging guide, starting from EXPLAIN for
your query. Please see [1] for details.
3. You could enable dataRegionMetrics and dataStorageMetrics as described
at [2] and compare the values under small/huge load and data volumes (see the
sketch after the links below).
4. You could also enable cache metrics [3].

[1] https://apacheignite-sql.readme.io/docs/performance-and-debugging
[2] https://apacheignite.readme.io/docs/memory-metrics
[3] https://apacheignite.readme.io/docs/cache-metrics
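
For example, region metrics could be enabled and read like this (a sketch
against the 2.3+ API; the printed metric is just an illustration):

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setMetricsEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg);

try (Ignite ignite = Ignition.start(cfg)) {
    for (DataRegionMetrics m : ignite.dataRegionMetrics())
        System.out.println(m.getName() + ": allocated pages = " + m.getTotalAllocatedPages());
}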

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No user-defined default MemoryPolicy found

2018-01-12 Thread Alexey Popov
I checked the code.

Actually, you can ignore this warning ("WARNING: No user-defined default
MemoryPolicy found; system default of 1GB size will be used.").

Apache Ignite still applies your defaultMemoryPolicySize.


Or you can change your config to avoid this warning by defining the default
memory policy explicitly, e.g. (a sketch; the 1GB max size is illustrative):

<property name="memoryConfiguration">
    <bean class="org.apache.ignite.configuration.MemoryConfiguration">
        <property name="defaultMemoryPolicyName" value="default"/>
        <property name="memoryPolicies">
            <list>
                <bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
                    <property name="name" value="default"/>
                    <property name="maxSize" value="#{1024L * 1024 * 1024}"/>
                </bean>
            </list>
        </property>
    </bean>
</property>
Thanks,
Alexey







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Use SQL to query IgniteRDD with scala at Zeppelin

2018-01-12 Thread Alexey Popov
Hi,

Please note that the REST protocol puts keys as Strings. So you have 3 Integer
keys and 1 String key in the cache.
That is the reason for your results.

You can find details in the topic at [1]

[1]
http://apache-ignite-users.70518.x6.nabble.com/Rest-API-PUT-command-syntax-tc19158.html

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: failed to find sql table for type:

2018-01-12 Thread Alexey Popov
Hi,

Do you still have this issue?

Can you share your code?

The Ignite.NET solution has QueryExample.cs; you can use it as reference code.

Thank you,
Alexey

From: kenn_thomp...@qat.com
Sent: Wednesday, January 10, 2018 5:56 PM
To: user@ignite.apache.org
Subject: failed to find sql table for type:

Ignite.NET 2.3.0

I'm pulling data from another DB and loading it into an Ignite node started under
a .NET Core console app. After converting the data rows into binary objects and
putting them into the cache, I both do a count and debug, and I see the
objects in the cache with the correct data.

I try pulling the objects out of the cache using SqlQuery, and get the
exception. I'm not sure what to check, but at one point yesterday I was able
to do it successfully. I'm not sure what I changed and have been unable to
walk back whatever change I thought I made.

Where should I dig to get this resolved? This is all running on a vanilla
Ignite node.


Exception has occurred: CLR/Apache.Ignite.Core.Common.IgniteException
An unhandled exception of type 'Apache.Ignite.Core.Common.IgniteException'
occurred in Apache.Ignite.Core.dll: 'Failed to find SQL table for type:
trebuchetsettings'
 Inner exceptions found, see $exception in variables window for more
details.
 Innermost exception Apache.Ignite.Core.Common.JavaException : class
org.apache.ignite.IgniteCheckedException: Failed to find SQL table for type:
trebuchetsettings
at
org.apache.ignite.internal.processors.platform.utils.PlatformUtils.unwrapQueryException(PlatformUtils.java:519)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.runQuery(PlatformCache.java:1220)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.processInStreamOutObject(PlatformCache.java:874)
at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: javax.cache.CacheException: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
find SQL table for type: trebuchetsettings
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:597)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:368)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.runQuery(PlatformCache.java:1214)
... 2 more
Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
find SQL table for type: trebuchetsettings
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSql(IgniteH2Indexing.java:1248)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$8.applyx(GridQueryProcessor.java:2068)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$8.applyx(GridQueryProcessor.java:2066)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.queryDistributedSql(GridQueryProcessor.java:2065)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySql(GridQueryProcessor.java:2045)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:582)
... 4 more



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: off heap memory usage

2018-01-12 Thread Alexey Popov
Hi Colin,

You should use TotalAllocatedPages if you have persistence disabled.

There is a known issue with PhysicalMemoryPages:
https://issues.apache.org/jira/browse/IGNITE-6963
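
For example, with persistence disabled an off-heap usage estimate could be
computed like this (a sketch; 4096 is the default page size):

for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    long pageSize = 4096; // default; see DataStorageConfiguration.getPageSize()
    System.out.println(m.getName() + " ~ " + m.getTotalAllocatedPages() * pageSize + " bytes");
}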

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-12 Thread Alexey Popov
Hi, 

You are right, "evicts=0" relates to cache evictions for on-heap caching
[1]. It should always be 0 for you.

I tried your case (with the same configs as you) and page evictions work
fine with a cache store enabled and indexed types. It seems that you have some
misconfiguration.

What are you trying to achieve by adding
.setIndexedTypes(keytag.runtimeClass, valtag.runtimeClass) to a String-value
cache? And what are keytag.runtimeClass and valtag.runtimeClass?

Could you please try with DummyClass with valid indexes enabled as below:

/**
 * DummyClass
 */
public class DummyClass {
    /** Dummy string. */
    public String dummyStr;

    /** Dummy int. */
    @QuerySqlField(index = true)
    public Integer dummyInt;

    public DummyClass(Integer dummyInt) {
        this.dummyInt = dummyInt;
        // StringUtils is e.g. org.apache.commons.lang3.StringUtils
        this.dummyStr = StringUtils.rightPad(dummyInt.toString(), 1024, '*');
    }
}

CacheConfiguration<Integer, DummyClass> cacheCfg =
    new CacheConfiguration<Integer, DummyClass>(CACHE_NAME)
        .setCacheMode(CacheMode.PARTITIONED)
        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
        .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL))
        .setDataRegionName(REG_NAME)
        .setStatisticsEnabled(true)
        .setCacheStoreFactory(FactoryBuilder.factoryOf(DummyStoreFromAdapter.class))
        .setReadThrough(true)
        .setIndexedTypes(Integer.class, DummyClass.class);

Thanks,
Alexey

[1] https://apacheignite.readme.io/docs/evictions#section-java-heap-cache



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: No user-defined default MemoryPolicy found

2018-01-11 Thread Alexey Popov
Hi,

Please have a look at the topic below to get an estimation of memory usage:

http://apache-ignite-users.70518.x6.nabble.com/off-heap-memory-usage-tc19282.html

I will check the warning later, it looks strange to me.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No user-defined default MemoryPolicy found

2018-01-11 Thread Alexey Popov
OK, I see you are on 2.1 (migrating from 2.1 to 2.3).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No user-defined default MemoryPolicy found

2018-01-11 Thread Alexey Popov
Hi,

I see the same message with 2.1 release. 2.3 does not have it.

I will check 2.1 source code later.

Do you use 2.1 release?

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-10 Thread Alexey Popov
Hi,

You are right, cache.putAll() can't evict the entries from the batch it is
working on, and you can get an Ignite OOME.
This is expected behavior because putAll() takes locks for all provided entry
keys. That is critical:
1) for transactional caches and
2) for any caches backed by a 3rd-party persistence store.

There was an intention to optimize this behavior for atomic caches without
a cache store [1], but it seems it will not be implemented. So you can rely on
this behavior; a simple workaround is to split large batches into smaller
chunks, as sketched below.

[1] https://issues.apache.org/jira/browse/IGNITE-514
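
A minimal sketch of such chunking (the chunk size and key/value types are
illustrative):

int chunkSize = 1_000; // illustrative; keep each batch well below region capacity
Map<Integer, String> buf = new HashMap<>();
for (Map.Entry<Integer, String> e : data.entrySet()) {
    buf.put(e.getKey(), e.getValue());
    if (buf.size() == chunkSize) {
        cache.putAll(buf); // page eviction can run between these smaller batches
        buf.clear();
    }
}
if (!buf.isEmpty())
    cache.putAll(buf);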

Thank you,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ClassCastException using cache and continuous query

2018-01-10 Thread Alexey Popov
Hi Diego,

It seems that your error is related to different class loaders being used.

I'm not sure why this happens, but please try to clean your "work"
directory in the Ignite home (IGNITE_HOME) after the 1.8 -> 2.3 upgrade, or set
up a new IGNITE_HOME.

Please share your node configs and the IgniteFetchItem class if you still face
the issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Creating multiple Ignite grids on same machine

2018-01-10 Thread Alexey Popov
Hi Raymond,

In your case you should configure:

1. different TcpDiscoverySpi local ports
2. different ports for TcpDiscoveryVmIpFinder (Vm = Static for .NET); you
should not use the default ipFinder
3. different TcpCommunicationSpi local ports

Please see the sample Java XML configs below as a reference. You can do
similar things with the Ignite.NET 2.3 configuration.

Sample cluster 1 cfg (a sketch; the communication port value is illustrative):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="localPort" value="48500"/>
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>127.0.0.1:48500..48509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="localPort" value="48100"/>
        </bean>
    </property>
</bean>

Sample cluster 2 cfg (the same sketch with a different port range):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="localPort" value="47500"/>
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>127.0.0.1:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="localPort" value="47100"/>
        </bean>
    </property>
</bean>

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Why does Ignite de-allocate memory regions ONLY during shutdown?

2018-01-09 Thread Alexey Popov
Hi John,

Could you please re-phrase your question?
What issue are you trying to solve?

Probably you should address your question directly to the dev list.

Anyway, you can read about the memory region architecture at [1] and [2]. Hope
it helps.
helps.

[1] https://apacheignite.readme.io/docs/memory-architecture
[2] 
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood

Thank you,
Alexey

From: John Wilson
Sent: Tuesday, January 9, 2018 1:58 AM
To: user@ignite.apache.org
Subject: Why does Ignite de-allocate memory regions ONLY during shutdown?

Hi,

I was looking at the UnsafeMemoryProvider and it looks to me that allocated
direct memory regions are deallocated only during shutdown.

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L80

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L63

My question:

If a memory region has been allocated and, during execution, all data pages
that are in the region are removed, then why isn't the memory region de-allocated?

Thanks,





Re: dotnet thin client - multiple hosts?

2018-01-09 Thread Alexey Popov
Hi Colin,

There is a ticket for this improvement.
https://issues.apache.org/jira/browse/IGNITE-7282

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Apache Ignite & unixODBC and truncating text

2018-01-08 Thread Alexey Popov
Hi Bagsiur,

It is some mono lib error below, not a user-code error.

Can you share a simple code reproducer? Just to see the order of actions in 
your code.

Thank you,
Alexey

From: bagsiur
Sent: Tuesday, January 9, 2018 10:36 AM
To: user@ignite.apache.org
Subject: Re: Apache Ignite & unixODBC and truncating text

So, I use Apache Ignite version 2.3.0.

I'm not a C# programmer, but I compiled the attached C# example with mono:

mcs -out:odbcvarchar.exe -r:System.dll -r:System.Data.dll odbcvarchar.cs

And when I run it I get the following errors:

[ERROR] FATAL UNHANDLED EXCEPTION: System.EntryPointNotFoundException:
LocalAlloc
  at (wrapper managed-to-native)
System.Data.Common.SafeNativeMethods:LocalAlloc (int,intptr)
  at System.Data.ProviderBase.DbBuffer..ctor (System.Int32 initialSize,
System.Boolean zeroBuffer) [0x0002f] in <23f11074adc346a7bcd76c006d949301>:0
  at System.Data.ProviderBase.DbBuffer..ctor (System.Int32 initialSize)
[0x0] in <23f11074adc346a7bcd76c006d949301>:0
  at System.Data.Odbc.CNativeBuffer..ctor (System.Int32 initialSize)
[0x0] in <23f11074adc346a7bcd76c006d949301>:0
  at System.Data.Odbc.OdbcCommand.GetStatementHandle () [0x00033] in
<23f11074adc346a7bcd76c006d949301>:0
  at System.Data.Odbc.OdbcCommand.ExecuteReaderObject
(System.Data.CommandBehavior behavior, System.String method, System.Boolean
needReader, System.Object[] methodArguments, System.Data.Odbc.ODBC32+SQL_API
odbcApiMethod) [0x00019] in <23f11074adc346a7bcd76c006d949301>:0
  at System.Data.Odbc.OdbcCommand.ExecuteReaderObject
(System.Data.CommandBehavior behavior, System.String method, System.Boolean
needReader) [0x0001c] in <23f11074adc346a7bcd76c006d949301>:0
  at System.Data.Odbc.OdbcCommand.ExecuteNonQuery () [0xa] in
<23f11074adc346a7bcd76c006d949301>:0
  at OdbcVarchars.Program.Main (System.String[] args) [0x00023] in
<873514a817794f1297b9f0b2be86f821>:0




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: How to make full use of network bandwidth?

2018-01-08 Thread Alexey Popov
Hi Michael,

Did you try non-default parameters for
1) socketSendBuffer and socketReceiveBuffer [1] in the JDBC connection string?
2) socketSendBufferSize and socketReceiveBufferSize [2] in the Ignite server
node configuration for SqlConnectorConfiguration?

Please change them to 128k and give it a try.
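
For the JDBC thin driver the buffers can be set right in the URL, e.g. (a
sketch; the host is illustrative):

// 131072 bytes = 128k socket buffers
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://127.0.0.1?socketSendBuffer=131072&socketReceiveBuffer=131072");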

The JDBC thin driver works in a single thread, executing SQL statements in a
one-by-one request-by-response manner, so I am not sure you can fully utilize
the network here.
Probably you could use an IgniteDataStreamer to achieve better results; please
have a look at [3] and at the sketch right below.
Or you could create multiple JDBC connections and parallelize your inserts
across multiple threads.
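
A minimal data streamer sketch (the cache name and key/value types are
illustrative):

try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.perNodeBufferSize(1024); // per-node batch size, illustrative
    for (long i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);
} // close() flushes the remaining buffered data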

[1]
https://apacheignite-sql.readme.io/docs/jdbc-driver#section-jdbc-thin-driver
[2]
https://apacheignite-sql.readme.io/docs/jdbc-driver#section-cluster-configuration
[3] https://apacheignite.readme.io/docs/data-loading

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: off heap memory usage

2018-01-08 Thread Alexey Popov
Hi Colin,

Unfortunately, you can't get the exact off-heap size.

There are several tickets here:
https://issues.apache.org/jira/browse/IGNITE-6814
https://issues.apache.org/jira/browse/IGNITE-5583
http://apache-ignite-users.70518.x6.nabble.com/Cache-size-in-Memory-td17226.html

You are using (pages * pageSize * pagesFillFactor) correctly (if you have
only one memory region).

BTW, pagesFillFactor is calculated using FreeList [1] buckets. A FreeList
has a limited number of buckets (for instance, approx. 25% free, ~50% free,
~75% free).
In this case (pages * pageSize * pagesFillFactor) gives you an approximate
value of memory usage (based on buckets), and you can see drops in the usage
when a page is moved from one bucket to another.

[1]
https://apacheignite.readme.io/docs/memory-architecture#section-free-lists

Thanks,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: java.lang.IllegalStateException: Cache has been closed:

2018-01-08 Thread Alexey Popov
Hi Raj,

IgniteCache implements the AutoCloseable interface [1].
You can't access a cache instance once IgniteCache.close() is called (implicitly
via try-with-resources or explicitly).

[1]
https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
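
For example (a sketch; the cache name is illustrative), the cache handle below
must not be used after the try block ends:

try (IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache")) {
    cache.put(1, "one"); // fine inside the block
} // close() is called here implicitly
// cache.get(1) here would throw IllegalStateException: Cache has been closed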

Thanks,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite & unixODBC and truncating text

2018-01-08 Thread Alexey Popov
Hi Bagsuir,

What Ignite version are you using?

Could you please share a reproducible example of the issue you have?
Or can you try the C# example attached (VS 2017)?

Thank you,
Alexey

odbcvarchar.cs
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Adresses of client's nodes in configuration file

2018-01-07 Thread Alexey Popov
Hi Alex,

You are right. You just need the IP addresses/hosts of the server nodes in the
configuration file.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Error of start with multiple data regions

2017-12-26 Thread Alexey Popov
Sorry, please ignore my response ). I just misread your message.

Thank you,
Alexey

From: Alexey Popov
Sent: December 26, 2017 12:05
To: user@ignite.apache.org
Subject: Re: Error of start with multiple data regions

Hi,

Apache Ignite does not have this functionality out of the box, and the
community cannot help with your question.
You should ask this question directly of the company that provides your
multiple data regions solution.

Thank you,
Alexey





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Error of start with multiple data regions

2017-12-26 Thread Alexey Popov
Hi,

Apache Ignite does not have this functionality out of the box, and the
community cannot help with your question.
You should ask this question directly of the company that provides your
multiple data regions solution.

Thank you,
Alexey





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory foot print of the ignite nThe background cache eviction process was unable to free [10] percent of the cache for Context

2017-12-25 Thread Alexey Popov
Naveen,

I am not sure I understand you correctly.

Ignite off-heap usage is part of the VIRT column in the "top" output. You can
add the DATA column to the "top" output to better visualize the memory usage of
pure on-heap+off-heap data only.

Please run Ignite with different memory settings and see the memory
footprint.

  PID USER  PR  NI    VIRT    RES   SHR S %CPU %MEM   TIME+ COMMAND SWAP CODE    DATA
 6833 root  20   0 3725072 242744 17740 S  1.0 12.0 0:11.02 java       0    4 1427232

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory foot print of the ignite nThe background cache eviction process was unable to free [10] percent of the cache for Context

2017-12-20 Thread Alexey Popov
Hi Naveen,

Off-heap memory "Off-heap size" works strangely.
There are several tickets here:
https://issues.apache.org/jira/browse/IGNITE-6814
https://issues.apache.org/jira/browse/IGNITE-5583
http://apache-ignite-users.70518.x6.nabble.com/Cache-size-in-Memory-td17226.html

You can get an estimation of off-heap memory usage as pages * pageSize (the
page size is 4k by default for 2.3):
 ^-- PageMemory [pages=2043040]
So, it is about 2043040 * 4096 = 8.3 GB.

Does that look realistic for your data models?

Please note that all caches are stored off-heap.
On-heap memory is used for data transfer, "hot" caches on top of off-heap
caches, etc.

Regarding your error:
Are you sure you use Ignite here for the cache variable?
It looks like org.apache.catalina.webresources.Cache is used instead of
Ignite in your code:

res = cache.get(custid).toString();

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Redis problem with values > 8kb

2017-12-11 Thread Alexey Popov
Thanks, Wolfram. 

Good to know you are not blocked by this issue.
Anyway, the issue is identified and will be fixed in the next release.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Redis problem with values > 8kb

2017-12-08 Thread Alexey Popov
Hi Wolfram,

I reproduced the issue in my env and created a ticket:
https://issues.apache.org/jira/browse/IGNITE-7153
Unfortunately, I did not find any workaround with the Redis client you use.

Thank you,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Locking rows within Ignite Transactions

2017-12-04 Thread Alexey Popov
Hi,

Please note that cache.get(N) does not lock all "rows" inside a transaction.
It just locks the "row" with key N.

Actually, you should not lock data that is not involved in the transaction.
That is why the explicit lock is prohibited; it should be done outside of the
transaction scope.

If your case requires some specific lock inside the transaction scope, you can
use cache.get(). If your cache entry (i.e. "row") is huge and you don't need
all the data, you can reduce some overhead by returning binary
(unmarshalled) data via cache.withKeepBinary().get(), as sketched below.
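
A sketch of such a per-key lock inside a pessimistic transaction (the key and
value types are illustrative):

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
    // get() acquires a lock on this key only, not on the whole cache
    IgniteCache<Integer, BinaryObject> binCache = cache.<Integer, BinaryObject>withKeepBinary();
    BinaryObject row = binCache.get(42); // binary form, no full deserialization
    // ... work with the row ...
    tx.commit();
}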

Thank you,
Alexey


 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Does the Ignite C# client support distributed queues?

2017-12-01 Thread Alexey Popov
Hi Raymond,

You are right, distributed queues require changes in the Ignite core:
https://issues.apache.org/jira/browse/IGNITE-2701
It was created almost 1 year ago. Please vote for this feature.
As far as I know there is no plan/schedule for it.

Thank you,
Alexey

From: Raymond Wilson
Sent: Friday, December 1, 2017 5:58 AM
To: user@ignite.apache.org
Cc: d...@ignite.apache.org
Subject: RE: Does the Ignite C# client support distributed queues?

Looking at it I see it's blocked by 2701 (which has additional
dependencies, all of which say they are blocked by 2701).

I understand there is an intention to bring the C# client up to par with
the Java client. Is there a ticket/schedule yet for this?

Raymond.

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: Friday, December 1, 2017 1:30 PM
To: user@ignite.apache.org
Subject: RE: Does the Ignite C# client support distributed queues?

Oops, I read wrong! This is not supported. There is a ticket, but it
doesn't seem to be active at the moment:
https://issues.apache.org/jira/browse/IGNITE-1417

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-11-30 Thread Alexey Popov
Hi Anand,

Ignite will collect a batch of updates for multiple operations if you enable
write-behind.
So, it will be done for entry.setValue() within Cache.invoke in your case.

And then Ignite will make a writeAll() call for the batch.

If your own CacheStore implementation does not override writeAll(), then the
default implementation is used; it is roughly:

for (Cache.Entry<? extends K, ? extends V> entry : entries)
    write(entry);

So, please implement writeAll() with respect to your legacy DB to get a
performance boost for batch updates; a sketch follows below.
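
A sketch of such an override (the Person value type, the SQL, and the
connection() helper are illustrative; adapt them to your DB):

@Override public void writeAll(Collection<Cache.Entry<? extends Integer, ? extends Person>> entries) {
    // One batched round-trip instead of one write() per entry.
    try (PreparedStatement ps = connection().prepareStatement(
            "MERGE INTO Person (id, name) VALUES (?, ?)")) {
        for (Cache.Entry<? extends Integer, ? extends Person> e : entries) {
            ps.setInt(1, e.getKey());
            ps.setString(2, e.getValue().getName());
            ps.addBatch();
        }
        ps.executeBatch();
    }
    catch (SQLException ex) {
        throw new CacheWriterException("Failed to write entries", ex);
    }
}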

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Redis problem with values > 8kb

2017-11-27 Thread Alexey Popov
Wolfram,

The buffer size is hardcoded now, but it could be made configurable if it is
a real issue.

Can you share a simple PHP sample?
I think the Java unit tests may miss some important details you have with PHP.
I wonder if I can test & find the missing part of the puzzle ).

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Redis problem with values > 8kb

2017-11-27 Thread Alexey Popov
Hi Wolfram,

I just ran the unit tests for Redis with a 10k string. They passed without
errors. Can you share a reproducible example?

Actually, the issue happens inside java.nio.HeapByteBuffer. What jdk/jre
version do you use?

BTW, this buffer size is set to 8k and it should be re-used in your case.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Indexes on custom objects

2017-11-27 Thread Alexey Popov
Hi Daniels,

Actually, you can have & use indexes for single (1-to-1 relation) nested
objects. You can't have indexes for nested collections:

"Indexes for nested *collections* and in particular for *maps* are not
supported."

I wonder why the examples don't show that. Please see how it could be used:

public class Model implements Serializable {
    @QuerySqlField
    private CustomObject obj;

    public Model(String sortField) {
        this.obj = new CustomObject(sortField);
    }
}

public class CustomObject {
    @QuerySqlField(index = true)
    private String objName;

    public CustomObject(String objName) {
        this.objName = objName;
    }
}

Sample queries:

List<Cache.Entry<Integer, Model>> res =
    cache.query(new SqlQuery<>(Model.class, "ORDER BY objName")).getAll();
res = cache.query(new SqlQuery<>(Model.class, "objName = ?").setArgs(1)).getAll();

Thank you,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache queries - Failed to run map query remotely

2017-11-24 Thread Alexey Popov
1. I am not sure that "ORDER BY" works the way you expect ). Very probably it
sorts by some object hash instead of casting the object to the specific derived
type.
2. I don't think you can do it directly, but there are two workaround
options here:
a) keep the Object as is and create a new String field just for
searching/ordering, i.e.

@QuerySqlField(index = true)
private String sortFieldKey;

private Object sortField;

b) you can have a separate cache for each dynamic type you have
Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 2.3.0 Console logging handler is not configured error in start-up

2017-11-16 Thread Alexey Popov
Hi Sumanta,

Please have a look at https://issues.apache.org/jira/browse/IGNITE-6828

This error message comes from the Spring framework
(org.springframework.data:spring-data-commons).
It is not an Ignite message and does not affect Ignite logging, so you can just
ignore it.

To avoid the message you could do the following:
1. Configure Slf4j logging for Spring
2. Exclude the Slf4j dependency from Spring in your pom file:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-commons</artifactId>
    <version>some version here</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Thank you,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AW: NullPointer in GridAffinityAssignment.initPrimaryBackupMaps

2017-11-15 Thread Alexey Popov
Lukas,

You are right, IGNITE_HOME cannot be the problem in embedded mode.

Do you have any specific Affinity Function parameters in Cache
configuration?

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Prohibit "Node is out of topology"

2017-11-14 Thread Alexey Popov
Hi Lukas,

Several "Finished serving remote node connection" messages show that the node
was cut off from the cluster (some socket issues).
Please enable the DEBUG log level to see more details about possible
reasons (socket errors, malformed messages, etc.).

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: NullPointer in GridAffinityAssignment.initPrimaryBackupMaps

2017-11-14 Thread Alexey Popov
Hi Lukas,

It seems that the new node is just misconfigured and can't correctly assign
partitions via the Affinity Function.

Please set a valid IGNITE_HOME and verify that the Affinity Function [1] matches
the cluster config:

2017-11-11 06:10:34:973 + [main] INFO
org.apache.ignite.internal.IgniteKernal - IGNITE_HOME=null

[1]
https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error serialising arrays using Ignite 2.2 C# client

2017-11-13 Thread Alexey Popov
Hi Raymond,

You are right. True multidimensional arrays are not supported now in the binary
serializer (C#).
Jagged arrays work fine, so you can use them, or just a one-dimensional array
with 2D-index calculation.

Anyway, I opened a ticket: https://issues.apache.org/jira/browse/IGNITE-6896
You can track progress on this issue there.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Node failed to startup due to deadlock

2017-11-13 Thread Alexey Popov
Oops, there is an issue with the text formatting below. It should be:

just replace
    cache2.lock("fake");
with
    Lock lock = ignite0.reentrantLock("fake", true, false, true);
where ignite0 is a "final" copy for the new thread:
    final Ignite ignite0 = ignite;

Thank you,
Alexey

From: Alexey Popov
Sent: Monday, November 13, 2017 2:12 PM
To: user@ignite.apache.org
Subject: RE: Node failed to startup due to deadlock

Hi Naresh,

I still don't have a clear understanding of your case. Very probably you
just need a Cache Store with Read-Through enabled. Please have a look at [1]
and Cache Store examples. 

As for the code provided - you can have a workaround here until
https://issues.apache.org/jira/browse/IGNITE-6380 is ready. Please use
Ignite.reentrantLock() instead of the transactional cache entry lock, i.e.

just replace
    cache2.lock("fake");
with
    Lock lock = ignite0.reentrantLock("fake", true, false, true);
where ignite0 is a "final" copy for the new thread:
    final Ignite ignite0 = ignite;

[1] https://apacheignite.readme.io/docs/3rd-party-store

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Node failed to startup due to deadlock

2017-11-13 Thread Alexey Popov
Hi Naresh,

I still don't have a clear understanding of your case. Very probably you
just need a Cache Store with Read-Through enabled. Please have a look at [1]
and Cache Store examples. 

As for the code provided - you can have a workaround here until
https://issues.apache.org/jira/browse/IGNITE-6380 is ready. Please use
Ignite.reentrantLock() instead of the transactional cache entry lock, i.e.

just replace
    cache2.lock("fake");
with
    Lock lock = ignite0.reentrantLock("fake", true, false, true);
where ignite0 is a "final" copy for the new thread:
    final Ignite ignite0 = ignite;


[1] https://apacheignite.readme.io/docs/3rd-party-store

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: write behind performance impacting main thread. Write behindbuffer is never full

2017-11-13 Thread Alexey Popov
Hi Larry,

BTW, there is an open improvement
https://issues.apache.org/jira/browse/IGNITE-5003
for the issue you faced.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Node failed to startup due to deadlock

2017-11-09 Thread Alexey Popov
Hi Naresh,

Unfortunately, I don't see any attachments; please send them again.

Can you clarify your node startup activity?
1) How do you lock the caches? IgniteCache.lock() or IgniteCache.lockAll()
locks only one cache entry or a batch of cache entries, not the whole cache.
2) Do you really need any locks if the cache is just cleared?
3) How do you perform the data load? Via the Streaming API or via a CacheStore
(i.e. IgniteCache.loadCache())?
4) I see that you have transactional caches; how do you expect a transaction to
finish during new node startup & cache cleanup?

Please note that Ignite caches are distributed and self-balanced. You don't
need to reload a cache when a new node enters the cluster. The cluster will
rebalance the caches it has, and the new node will receive its part of the
cache entries without any extra actions from your side.

Thank you,
Alexey

From: naresh.goty
Sent: Thursday, November 9, 2017 9:54 AM
To: user@ignite.apache.org
Subject: RE: Node failed to startup due to deadlock

Hi Alexey,

Thank you for pointing out the problem with caches causing deadlocks.
I fixed the classpath pointing to 1.9, but the issue still exists.

Actually, we could reproduce the deadlock with a scenario where two nodes try
to come up at the same time, and the first node holds a lock on a cache
(see the attached sample code, run in the following order):
1) Start App.java and wait till the cache is locked
2) Start App2.java; then we see the deadlock on App2.

We also tried the patch provided in IGNITE-6380, but it is rejecting jobs
which are waiting:
ex=class o.a.i.compute.ComputeExecutionRejectedException: Pending topology
found - job execution within lock or transaction was canceled., hasRes=true,
isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw exception.

To summarize our node startup process:
1) Each node upon start, it will create or get the caches. 
2) Lock the caches
3) if caches not already loaded, then it will clear the caches, and perform
dataload
4) notify through ignite messaging about cache load activities
5) release lock on caches.

So, if multiple nodes are starting at the same time, all the nodes will try
to perform the above activities. During this process we ensure that
each cache is loaded only once.

Can you please confirm if the above usecase is supported with ignite?

Thanks
Naresh
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: write behind performance impacting main thread. Write behindbuffer is never full

2017-11-08 Thread Alexey Popov
Hi Larry,

Please note that applyBatch() takes locks after the call to updateStore(), while
flushCacheCoalescing() takes them before updateStore().
updateStore() does all the work of persisting your data via the CacheStore, so
it could be quite a long & blocking operation for one flushing thread. It seems
that you can gain something here by turning coalescing off.

1. There are no changes here b/w 2.3 and 2.1.
2. Could you please clarify your proposal for the fix?
For your use case:
a) if you enable coalescing you will have ONE update (within a batch) to the DB
while you perform multiple updates to the same cache entry. So the DB load is
reduced here, but it costs some locking overhead to resolve the updates.
b) if you disable coalescing you will have MULTIPLE updates to the DB while you
perform multiple updates to the same cache entry. It will reduce the locking
overhead but loads your DB more heavily.
You can find some balance by tuning the batch size and # of flushing threads,
with/without coalescing.

Thank you,
Alexey

From: Larry Mark
Sent: Wednesday, November 8, 2017 2:27 AM
To: user@ignite.apache.org
Cc: Henry Olschofka
Subject: Re: write behind performance impacting main thread. Write behindbuffer 
is never full

Alexey,

I dug into this a bit more and it is the perfect storm of the way the write
behind works and the way we are using one of our caches. We need to keep our
Kafka offsets persisted, so we have a cache with the key being a topic and
partition. When we get a record from that combination we update the value.
When we are very busy we are constantly getting messages; the contents of
each message get distributed to many caches, but the offset goes to the same
cache with the same key. When that gets flushed to disk, the coalescing keeps
locking that key and is in contention with the main thread trying to update
the key. Turning off coalescing does not seem to help: first of all, if I am
reading the code correctly, it is still going to take locks in applyBatch after
the call to updateStore, and if we have not coalesced we will take the lock on
the same value over and over. Also, because we rewrite that key constantly,
without coalescing the write-behind cannot keep up.

Now that we understand what is going on we can work around this.  

Two quick questions:
- We are on 2.1; is there anything changed in this area in 2.3 that might make
this better?
- Is this use case of updating the same key unique to us, or is it common
enough that there should be a fix to the coalescing code?

Best,

Larry


On Fri, Nov 3, 2017 at 5:14 PM, Larry Mark <larry.m...@principled.io> wrote:
Alexey,

With our use case, setting coalescing off will probably make it worse; for at
least some caches we are doing many updates to the same key, which is one of
the reasons I am setting the batch size to 500.

I will send the cachestore implementation and some logs that show the 
phenomenon early next week.  Thanks for your help.

Larry 

On Fri, Nov 3, 2017 at 12:11 PM, Alexey Popov <tank2.a...@gmail.com> wrote:
Hi,

Can you share your cache store implementation?

There could be several reasons for possible performance degradation in
write-behind mode.
Ignite can start flushing your cache values in the main() thread if the cache
size becomes greater than 1.5 x setWriteBehindFlushSize. It is a common case,
but it does not look like your case.

The WriteBehind implementation can use a ReentrantReadWriteLock while you
insert/update the cache entries in your main thread. WriteBehind background
threads use these locks when they read and flush entries.
Such a WriteBehind implementation is used when writeCoalescing is turned on by
setWriteBehindCoalescing(true); BTW, the default value is TRUE. Actually, it
makes sense only when you configure several flush threads
(setWriteBehindFlushThreadCount(X)) to have real concurrency in multiple
reads and writes.

It is hard to believe that it could slow down your main() thread, but please
check: just add setWriteBehindCoalescing(false) to your config and try your
tests again.

Thanks,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





RE: Node failed to startup due to deadlock

2017-11-07 Thread Alexey Popov
Hi Naresh,

I see deadlocks with:
1. com.rover.core.dao.model.admin.AdminPreferenceTable (2 locks)
2. com.rover.core.dao.model.product.assembly.AssemblyUpi2BomTable (2 locks)
3. com.rover.core.dao.model.product.assembly.AssemblyBomUpi2PackageTable
4. com.rover.core.dao.model.product.ProductVersionInfoTable

Could you please provide a reproducible sample for this issue to move further?

I don't have a clear understanding of what can cause this issue, and I could
not reproduce it.

BTW, please check your configuration; I see a versions mismatch (2.3 vs 1.9) in
the Ignite output:

>>> Ignite ver. 2.3.0-SNAPSHOT#19691231-sha1:DEV
INFO: IGNITE_HOME=C:\development\software\apache-ignite-fabric-1.9.0-bin

Please update IGNITE_HOME to a correct path.

Thank you,
Alexey

From: naresh.goty
Sent: Monday, November 6, 2017 5:36 AM
To: user@ignite.apache.org
Subject: RE: Node failed to startup due to deadlock

Hi Alexey,

We are still seeing the deadlocks for the scenario i have specified earlier.
We tried the below two changes, but still seeing the deadlocks. Can you
please provide some pointers about the issue based on the logs and
threaddumps attached.

1) As Rajeev mentioned, we tried offloading event handling to application
threads.
2) Applied the patch fix
(https://issues.apache.org/jira/browse/IGNITE-6380#), but still the same
issue.


Node1.log
Node2.log
Node1_AfterNode2_Turndown.tdump
Node1_AfterNode2_StartedAgain.tdump
Node2_AfteStartedAgain.tdump


Regards,
Naresh





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: write behind performance impacting main thread. Write behind buffer is never full

2017-11-03 Thread Alexey Popov
Hi,

Can you share your cache store implementation?

There could be several reasons for possible performance degradation in
write-behind mode.
Ignite can start flushing your cache values in the main() thread if the cache
size becomes greater than 1.5 x setWriteBehindFlushSize. It is a common case,
but it does not look like your case.

The WriteBehind implementation can use a ReentrantReadWriteLock while you
insert/update the cache entries in your main thread. WriteBehind background
threads use these locks when they read and flush entries.
Such a WriteBehind implementation is used when writeCoalescing is turned on by
setWriteBehindCoalescing(true); BTW, the default value is TRUE. Actually, it
makes sense only when you configure several flush threads
(setWriteBehindFlushThreadCount(X)) to have real concurrency in multiple
reads and writes.

It is hard to believe that it could slow down your main() thread, but please
check: just add setWriteBehindCoalescing(false) to your config and try your
tests again.

Thanks,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Node failed to startup due to deadlock

2017-11-03 Thread Alexey Popov
Rajeev,

I see a “WARNING: Failed to wait for partition release future” section in your
logs.
It includes “WARNING: Pending explicit locks:” with 7 cache locks
(GridCacheExplicitLockSpan).
The GridCacheExplicitLockSpan details don’t have specific information about the
cache operation, but they do have the topology version
(topVer=3).

So, very probably these locks were taken on some event when Node 2 left the
cluster:

7:50:31 PM 
INFO: Topology snapshot [ver=2, servers=2, clients=0, CPUs=8, heap=5.5GB]
8:00:15 PM 
INFO: Topology snapshot [ver=3, servers=1, clients=0, CPUs=8, heap=2.0GB]
8:08:17 PM 
INFO: Topology snapshot [ver=4, servers=2, clients=0, CPUs=8, heap=5.5GB]

Thank you,
Alexey

From: rajivgandhi
Sent: Thursday, November 2, 2017 7:43 PM
To: user@ignite.apache.org
Subject: Re: Node failed to startup due to deadlock

Can you please also help us understand how you diagnosed the symptom? It is
hard to understand deadlocks for distributed operations.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Node failed to startup due to deadlock

2017-11-02 Thread Alexey Popov
Hi Rajeev,

All Ignite callbacks are processed in (= called from) internal Ignite thread
pools.
The generic rule here is to avoid direct Ignite calls from such callbacks.
So, it is better to do all your work in separate threads (use an
ExecutorService, for instance).
You should pass all work to your own threads and release the internal Ignite
threads by returning from the callbacks.

Sample code:

final ExecutorService executorService = Executors.newFixedThreadPool(10);

ignite.events().localListen(new IgnitePredicate<Event>() {
    @Override public boolean apply(Event event) {
        executorService.execute(new Runnable() {
            public void run() {
                // do all the job here
                ignite.log().error("Event: " + event.name());
            }
        });
        return true;
    }
}, EventType.EVTS_DISCOVERY);

I checked your logs, there are deadlocks at Cache operations.

Thanks,
Alexey.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to create string representation of binary object.

2017-11-01 Thread Alexey Popov
Ankit,

That looks very strange.
Your class does not have registrationInfoResponse field which is mentioned
in the error.

Please confirm that node with id="b2df236f-4fba-4794-b0e4-4e040581ba9d" is a
part of your load testing cluster.

Do you have peerClassLoadingEnabled=true at your configs?

Thanks,
Alexey






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node failed to startup due to deadlock

2017-11-01 Thread Alexey Popov
Naresh, Rajeev

I've looked through the logs.
Node2 is stuck because it is waiting for the Node1
(id=96eea214-a2e3-4e0a-b8c6-7fdf3f29727c) partition map exchange response.
So it looks like the real deadlock is at Node1
(id=96eea214-a2e3-4e0a-b8c6-7fdf3f29727c).

Can you also send Node1 logs and thread dumps from it?

I will check them and let you know if it is a known/fixed issue.

BTW, please ensure that your event listeners at Node1 do not update Ignite
caches/etc. It could be a similar case to
[http://apache-ignite-users.70518.x6.nabble.com/Threads-waiting-on-cache-reads-tt17802.html]

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to create string representation of binary object.

2017-10-31 Thread Alexey Popov
Hi Ankit,

I see two lines in the exception below:

"Failed to read field: registrationInfoResponse"
"com.partygaming.services.mds.userprofile.api.UserRegistrationInfoResponse"

It seems that UserRegistrationInfoResponse.registrationInfoResponse becomes
null / incorrect during your test, and it could not be unmarshalled by Ignite.

Could you please share the details of your UserRegistrationInfoResponse class?

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Deadlock on Node Remove and Add

2017-10-31 Thread Alexey Popov
Hi,

Is it the same issue
http://apache-ignite-users.70518.x6.nabble.com/Node-failed-to-startup-due-to-deadlock-tc17839.html?

Can you try the build from a master branch?

Thanks,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node failed to startup due to deadlock

2017-10-31 Thread Alexey Popov
Hi Naresh,

Can you share your log from Node 2 and configuration file?
The issue is related to Cache PartitionExchange procedure during node
startup.

Very probably this issue is already fixed on master branch.

Thank you in advance,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-30 Thread Alexey Popov
Hi Anand,

Can you share your CacheStore implementation?
Do you use CacheStoreAdapter<>?
Very probably you don't have your own "CacheStore.writeAll()" implementation,
which is used for batches, so the default one is used (it just sequentially
calls "CacheStore.write()" for all entries in a batch).

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-26 Thread Alexey Popov
Hi Anand,

>In regards to your comment on cache will not get updated using invoke. How 
>do I ensure that the new computed value gets stored in the cache. 
>The first goal is to iterate through the cache and update a specific data 
>field in the cache in the fastest way possible. 
>The second goal is to - the cache has writethrough/writebehind enabled and 
>hence the updates will need to be propagated to the database as well. 

1. Just don't forget to call entry.setValue(val) at the final step of your
invoke() to store the updated value:

cache.invoke(key, (entry, args) -> {
    Fact val = entry.getValue();
    // do some logic
    val.setAmount(val.getAmt1() + val.getAmt2());
    // save results
    entry.setValue(val);
    return val;
});

It would not add any visible processing time.
You could try batch requests here with invokeAll() (as you have in your code
before) to achieve better performance, e.g. as sketched below.
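
A sketch of such a batch (keys is a Set<Integer> here; Fact is your value type):

Map<Integer, EntryProcessorResult<Fact>> results = cache.invokeAll(keys, (entry, args) -> {
    Fact val = entry.getValue();
    val.setAmount(val.getAmt1() + val.getAmt2());
    entry.setValue(val); // store the updated value back to the cache
    return val;
});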

2. All such updates should be automatically propagated by Ignite to the DB via
the CacheStore if you have write-through enabled.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-25 Thread Alexey Popov
Hi Anand,

1. Actually, you should broadcast your logic to all nodes via
Ignite.compute().
Please see the sample below:

Collection<Integer> res = ignite.compute().broadcast(
    new IgniteCallable<Integer>() {
        /** Auto-inject ignite instance. */
        @IgniteInstanceResource
        private Ignite ignite;

        @Override public Integer call() {
            IgniteCache<Integer, Integer> cache =
                ignite.getOrCreateCache(CACHE_NAME);
            Iterator<Cache.Entry<Integer, Integer>> iterator =
                cache.localEntries().iterator();

            Integer key;
            Integer cnt = 0;
            while (iterator.hasNext()) {
                key = iterator.next().getKey();

                Integer res = cache.invoke(key, (entry, args) -> {
                    Integer val = entry.getValue();
                    // do some logic
                    val = val + 1;
                    return val;
                });
                cnt++;
            }
            return cnt;
        }
    }
);

// just ensure that we went through all keys
int entryCount = 0;
for (Integer r : res)
    entryCount += r;

2. And please note that cache.invokeAll() from your code does not store the new
cache entry value back to the cache (there is no entry.setValue() call), so you
will not see any updates in the cache after such invokes:

"val.setAmount(val.getAmt1() + val.getAmt2());"

Thank you,
Alexey





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/