Re: Using more memory than is physically available

2018-08-09 Thread Denis Magda
If you don't want to lose the data that doesn't fit in RAM on restarts,
then go for Ignite persistence.

If it's fine to lose that data on restarts, then OS swapping is a good
option as well:
https://apacheignite.readme.io/docs/swap-space
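
For reference, a minimal sketch of the swap-space option described on that page, assuming an Ignite 2.x Spring XML configuration; the region size and swap path below are placeholders, not recommendations:

```xml
<!-- Sketch: a non-persistent data region backed by a swap file. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Region may exceed physical RAM; cold pages go to the swap file. -->
                <property name="maxSize" value="#{16L * 1024 * 1024 * 1024}"/>
                <property name="swapPath" value="/path/to/swap"/>
            </bean>
        </property>
    </bean>
</property>
```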

--
Denis

On Thu, Aug 9, 2018 at 2:33 PM nitin.phad...@optum.com <
nitin.phad...@optum.com> wrote:

> Will Apache Ignite allow me to use more memory than is physically available
> on my system?
>
> If yes, Do I need to turn persistence on?
>
> If I do not need to turn persistence on , how do I specify the location on
> disk to which memory will be swapped?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Transaction Throughput in Data Streamer

2018-08-09 Thread Dave Harvey
We are trying to load and transform a large amount of data with the
IgniteDataStreamer and a custom StreamReceiver. We'd like this to run
a lot faster, but we cannot find anything that is close to saturated
except the data-streamer threads and queues. This is Ignite 2.5, with
Ignite persistence, and enough memory to fit all the data.

I was looking to turn some knob, like the size of a thread pool, to
increase the throughput, but I can't find any bottleneck. If I turn up
the demand, the throughput does not increase and the per-transaction
latency increases, which indicates a bottleneck somewhere.

The application has loaded about 900 million records of type A at this
point, and now we would like to load 2.5 billion records of type B. Records
of type A have a key and a unique ID. Records of type B have a different
key type, plus a foreign field that is A's unique ID. The key we use in
Ignite for record B is (B's key, with A's key as the affinity key). We also
maintain caches to map A's ID back to its key, and something similar for B.

For each record the stream receiver starts a pessimistic transaction; we
end up with one local get, 2-3 gets with no affinity (i.e. 50% local
on two nodes), and 2-4 puts before we commit the transaction (FULL_SYNC
caches). There are several fields with indices.

I've simplified this down to two nodes, with 4 caches each with one
backup, all with WAL logging disabled. The two nodes have 256 GB of memory,
32 CPUs, and unmirrored local SSDs (i3.8xlarge on AWS). The network is
supposed to be 10 Gb. The dataset is basically in memory, and with the WAL
disabled there is very little I/O.

Disabling WAL logging only pushed the transaction rate from about 1750
to about 2000 TPS.

The CPU doesn't get above 20%, and the network traffic is only about 6 MB/s
and 1500 packets per second per node. The read wait time on the SSDs is
only enough to lock up a single thread, and there are no writes except
during checkpoints.

When I look at thread dumps, there is no obvious bottleneck except for the
DataStreamer threads. Doubling the number of DataStreamer threads from the
current 64 to 128 has no effect on throughput.
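
For reference, the streamer pool size being discussed here is set on the IgniteConfiguration; a sketch, not a recommendation (as noted above, raising it did not help in this case):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Server-side data streamer pool; appears as dataStreamerPoolSize
         in the IgniteConfiguration startup log line. -->
    <property name="dataStreamerThreadPoolSize" value="128"/>
</bean>
```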

Looking via MBeans, where I have a fix for IGNITE-7616, the DataStreamer
pool is saturated; the StripedExecutor is not. With the WAL enabled,
the StripedExecutor shows some bursty load; when disabled, its active
thread count and queue stay low. The work is distributed across the
StripedExecutor threads. The non-DataStreamer thread pools all frequently
go to 0 active threads, while the DataStreamer pool stays backed up.

With the WAL on and 64 DataStreamer threads, there tended to be about 53
"owner transactions" on the node.

A snapshot of transactions outstanding follows.

Is there another place to look? The DS threads tend to be waiting on
futures, and the other threads are consistent with the relatively

Thanks
-DH

f0a49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
104

33549c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal],
dae1a619-4886-4001-8ac5-6651339c67b7 [ip-172-17-0-1.ec2.internal,
ip-10-32-98-209.ec2.internal]], DURATION: 134

b0949c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

2ca49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
104

96349c53561--08a9-7ea9--0002=PREPARED, NEAR, DURATION: 134

9ca49c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 104

28f39c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
215

a2649c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
124

e7849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

06849c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

89849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

35549c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 134

f0449c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
134


Using more memory than is physically available

2018-08-09 Thread nitin.phad...@optum.com
Will Apache Ignite allow me to use more memory than is physically available
on my system?

If yes, Do I need to turn persistence on?

If I do not need to turn persistence on , how do I specify the location on
disk to which memory will be swapped?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark to Ignite Data load, Ignite node crashing

2018-08-09 Thread ApacheUser
Attaching logs of the two nodes that crash every time. I have 4 nodes, but
the other two nodes very rarely crash. All nodes (VMs) are 4 CPU / 16 GB
RAM / 200 GB HDD (shared storage).

node 3:
[16:35:21,938][INFO][main][IgniteKernal] 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.6.0#20180710-sha1:669feacc
>>> 2018 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[16:35:21,946][INFO][main][IgniteKernal] Config URL:
file:/data/ignitedata/apache-ignite-fabric-2.6.0-bin/config/default-config.xml
[16:35:21,954][INFO][main][IgniteKernal] IgniteConfiguration
[igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8,
stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=4,
dataStreamerPoolSize=8, utilityCachePoolSize=8,
utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
igniteHome=/data/ignitedata/apache-ignite-fabric-2.6.0-bin,
igniteWorkDir=/data/ignitedata/apache-ignite-fabric-2.6.0-bin/work,
mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
nodeId=df202ccb-356f-426a-8131-e2cc0b9bf98f,
marsh=org.apache.ignite.internal.binary.BinaryMarshaller@3023df74,
marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60,
forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi
[connectGate=null, connPlc=null, enableForcibleNodeKill=false,
enableTroubleshootingLog=false,
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6302bbb1,
locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1,
directBuf=true, directSndBuf=false, idleConnTimeout=60,
connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=1000, nioSrvr=null,
shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32,
unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@31304f14[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@34a3d150],
evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@2a4fb17b,
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7cc0cdad,
addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@7c7b252e,
cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=1,
clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@4d5d943d,
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration
[sysRegionInitSize=41943040, sysCacheMaxSize=104857600, pageSize=0,
concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default,
maxSize=10737418240, initSize=268435456, swapPath=null,
pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100,
metricsEnabled=true, metricsSubIntervalCount=5,
metricsRateTimeInterval=6, persistenceEnabled=true,
checkpointPageBufSize=0], storagePath=/data/ignitedata/data,
checkpointFreq=18, lockWaitTime=1, checkpointThreads=4,
checkpointWriteOrder=SEQUENTIAL, walHistSize=20, walSegments=10,
walSegmentSize=67108864, walPath=/root/ignite/wal,
walArchivePath=db/wal/archive, metricsEnabled=true, walMode=LOG_ONLY,
walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000,
walRecordIterBuffSize=67108864, alwaysWriteFullPages=false,
fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@4c583ecf,
metricsSubIntervalCnt=5, metricsRateTimeInterval=6,
walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false,
walCompactionEnabled=false], activeOnStart=true, autoActivation=true,
longQryWarnTimeout=500, sqlConnCfg=null,
cliConnCfg=ClientConnectorConfiguration [host=null, port=10800,
portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true,
maxOpenCursorsPerConn=128, threadPoolSize=8, 

Re: No response from ignite job tracker

2018-08-09 Thread engrdean
Still looking for any suggestions on this.  Anyone have any ideas for next
steps to troubleshoot?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Proper config for IGFS eviction

2018-08-09 Thread engrdean
I've been struggling to find a configuration that works successfully for IGFS
with Hadoop filesystem caching. Anytime I attempt to load more data than
will fit into memory on my Ignite node, the Ignite process crashes.

The behavior I am looking for is that old cache entries are evicted when
I write new data to IGFS that exceeds the available memory on the
server. I can see that my data is being persisted into HDFS, but I seem to
be limited to the amount of physical memory on my Ignite server at the
moment. I am using the teragen example to generate the files on Hadoop for
the purposes of this test, like so:

time hadoop-ig jar
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar
teragen 1 igfs://i...@myserver.com/tmp/output1

If I have systemRegionMaxSize set to a value less than the physical memory
on my ignite server, then the message is something like this:

class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of
memory in data region [name=sysMemPlc, initSize=1.0 GiB, maxSize=14.0 GiB,
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
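
The eviction suggestion in that message can be sketched as a data-region setting; this is a sketch with placeholder values, assuming the data in question lives in the default region:

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="default"/>
    <!-- Evict cold data pages once the region fills up, instead of
         throwing IgniteOutOfMemoryException. -->
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
    <property name="evictionThreshold" value="0.9"/>
</bean>
```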
If I increase the systemRegionMaxSize to a value greater than the physical
memory on my ignite server, the message is something like this:

[2018-08-09 12:16:08,174][ERROR][igfs-#171][GridNearTxLocal] Heuristic
transaction failure.
class
org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException:
Failed to locally write to cache (all transaction entries will be
invalidated, however there was a window when entries for this transaction
were visible to others): GridNearTxLocal [mappings=IgniteTxMappingsImpl [],
nearLocallyMapped=false, colocatedLocallyMapped=true, needCheckBackup=null,
hasRemoteLocks=false, trackTimeout=false, lb=null, thread=igfs-#171,
mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
depEnabled=false, txState=IgniteTxStateImpl [activeCacheIds=[-313790114],
recovery=false, txMap=[IgniteTxEntry [key=KeyCacheObjectImpl [part=504,
val=IgfsBlockKey [fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4,
blockId=52879, affKey=null, evictExclude=true], hasValBytes=true],
cacheId=-313790114, txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=504,
val=IgfsBlockKey [fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4,
blockId=52879, affKey=null, evictExclude=true], hasValBytes=true],
cacheId=-313790114], val=[op=CREATE, val=CacheObjectByteArrayImpl
[arrLen=65536]], prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null],
entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
explicitVer=null, dhtVer=null, filters=[], filtersPassed=false,
filtersSet=true, entry=GridDhtCacheEntry [rdrs=[], part=504,
super=GridDistributedCacheEntry [super=GridCacheMapEntry
[key=KeyCacheObjectImpl [part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52879,
affKey=null, evictExclude=true], hasValBytes=true], val=null,
startVer=1533830728270, ver=GridCacheVersion [topVer=145310277,
order=1533830728270, nodeOrder=1], hash=-915370253,
extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc
[locs=[GridCacheMvccCandidate [nodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f,
ver=GridCacheVersion [topVer=145310277, order=1533830728268, nodeOrder=1],
threadId=224, id=258264, topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=0], reentry=null,
otherNodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f, otherVer=GridCacheVersion
[topVer=145310277, order=1533830728268, nodeOrder=1], mappedDhtNodes=null,
mappedNearNodes=null, ownerVer=null, serOrder=null, key=KeyCacheObjectImpl
[part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52879,
affKey=null, evictExclude=true], hasValBytes=true],
masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=1|near_local=0|removed=0|read=0,
prevVer=GridCacheVersion [topVer=145310277, order=1533830728268,
nodeOrder=1], nextVer=GridCacheVersion [topVer=145310277,
order=1533830728268, nodeOrder=1]]], rmts=null]], flags=2]]], prepared=1,
locked=false, nodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f, locMapped=false,
expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
serReadVer=null, xidVer=GridCacheVersion [topVer=145310277,
order=1533830728268, nodeOrder=1]], IgniteTxEntry [key=KeyCacheObjectImpl
[part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52880,
affKey=null, evictExclude=true], hasValBytes=true], cacheId=-313790114,
txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52880,

Re: continuous query remote filter issue

2018-08-09 Thread Ilya Kasnacheev
Hello!

I have just created a sample project with your code (by running two
standalone nodes with peerAssemblyLoadingMode="CurrentAppDomain" as well as
one client with the project code) and it runs just fine. Both the Hello
World closure and the filter execute correctly.

This is slightly puzzling to me, as I would expect cache key-value types
NOT to be peer loaded, but still, it works.

Please share a reproducer project if it still fails for you after checking
the configurations.

Regards,

-- 
Ilya Kasnacheev

2018-08-09 17:58 GMT+03:00 Som Som <2av10...@gmail.com>:

> in the first example I have not deployed the HelloAction class manually,
> but it works correctly; it means that the class was successfully
> transferred and deployed. But why does this not work in the case of the
> EmployeeEventFilter class? Is it an error?
>
> Thu, 9 Aug 2018, 15:46, Ilya Kasnacheev:
>
>> Hello!
>>
>> I'm not sure that C# has peer class loading. Are you sure that you have
>> this filter's code deployed on your server nodes?
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-08-09 15:23 GMT+03:00 Som Som <2av10...@gmail.com>:
>>
>>> is there any information?
>>>
>>> -- Forwarded message -
>>> From: Som Som <2av10...@gmail.com>
>>> Date: Wed, 8 Aug 2018, 16:46
>>> Subject: continuous query remote filter issue
>>> To: 
>>>
>>>
>>> hello.
>>>
>>> It looks like peerAssemblyLoadingMode flag doesn’t work correctly in
>>> case of CacheEntryEventFilter:
>>>
>>>
>>>
>>> As an example:
>>>
>>>
>>>
>>> 1)  This code works fine and I see “Hello world” on the server console.
>>> It means that HelloAction class was successfully transferred to server.
>>>
>>>
>>>
>>> class Program
>>>
>>> {
>>>
>>> static void Main(string[] args)
>>>
>>> {
>>>
>>> using (var ignite = Ignition.StartFromApplicationConfiguration())
>>>
>>> {
>>>
>>> var remotes = ignite.GetCluster().ForRemotes();
>>>
>>> remotes.GetCompute().Broadcast(new HelloAction());
>>>
>>> }
>>>
>>> }
>>>
>>>
>>>
>>> class HelloAction : IComputeAction
>>>
>>> {
>>>
>>> public void Invoke()
>>>
>>> {
>>>
>>> Console.WriteLine("Hello, World!");
>>>
>>> }
>>>
>>> }
>>>
>>> }
>>>
>>> 2)  But this code that sends the filter class to the remote server
>>> node generates an error and I receive 4 entries of Employee instead of
>>> 2 as expected:
>>>
>>> class Program
>>>
>>> {
>>>
>>> public class Employee
>>>
>>> {
>>>
>>> public Employee(string name, long salary)
>>>
>>> {
>>>
>>> Name = name;
>>>
>>> Salary = salary;
>>>
>>> }
>>>
>>>
>>>
>>> [QuerySqlField]
>>>
>>> public string Name { get; set; }
>>>
>>>
>>>
>>> [QuerySqlField]
>>>
>>> public long Salary { get; set; }
>>>
>>>
>>>
>>> public override string ToString()
>>>
>>> {
>>>
>>> return string.Format("{0} [name={1}, salary={2}]",
>>> typeof(Employee).Name, Name, Salary);
>>>
>>> }
>>>
>>> }
>>>
>>>
>>>
>>> class EmployeeEventListener : ICacheEntryEventListener<int, Employee>
>>>
>>> {
>>>
>>> public void OnEvent(IEnumerable<ICacheEntryEvent<int, Employee>> evts)
>>>
>>> {
>>>
>>> foreach(var evt in evts)
>>>
>>> Console.WriteLine(evt.Value);
>>>
>>> }
>>>
>>> }
>>>
>>>
>>>
>>> class EmployeeEventFilter : ICacheEntryEventFilter<int, Employee>
>>>
>>> {
>>>
>>> public bool Evaluate(ICacheEntryEvent<int, Employee> evt)
>>>
>>> {
>>>
>>> return evt.Value.Salary > 5000;
>>>
>>> }
>>>
>>> }
>>>
>>>
>>>
>>> static void Main(string[] args)
>>>
>>> {
>>>
>>> using (var ignite = Ignition.StartFromApplicationConfiguration())
>>>
>>> {
>>>
>>> var employeeCache = ignite.GetOrCreateCache<int, Employee>(
>>> new CacheConfiguration("employee", new QueryEntity(
>>> typeof(int), typeof(Employee))) { SqlSchema = "PUBLIC" });
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> var query = new ContinuousQuery<int, Employee>(new
>>> EmployeeEventListener())
>>>
>>> {
>>>
>>> Filter = new EmployeeEventFilter()
>>>
>>> };
>>>
>>>
>>>
>>> var queryHandle = employeeCache.QueryContinuous(query);
>>>
>>>
>>>
>>> employeeCache.Put(1, new Employee("James Wilson", 1000));
>>>
>>> employeeCache.Put(2, new Employee("Daniel Adams", 2000));
>>>
>>> employeeCache.Put(3, new Employee("Cristian Moss", 7000));
>>>
>>> employeeCache.Put(4, new Employee("Allison Mathis", 8000));
>>>
>>>
>>>
>>> Console.WriteLine("Press any key...");
>>>
>>> Console.ReadKey();

System cache's DataRegion size is configured to 40 MB.

2018-08-09 Thread Alberto Mancini
Hello,
do I miss something, or is the message at
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L2461
incorrect?
I think it should refer to
DataStorageConfiguration.*systemRegionInitialSize*
and not to
DataStorageConfiguration.*systemCacheMemorySize*
Am I wrong?

Thanks.
   Alberto.
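
For context, the system region the 40 MB message refers to is sized via DataStorageConfiguration; a sketch, with values mirroring the defaults visible in the startup logs elsewhere in this digest:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- Shown as sysRegionInitSize / sysCacheMaxSize in the startup log. -->
    <property name="systemRegionInitialSize" value="#{40L * 1024 * 1024}"/>
    <property name="systemRegionMaxSize" value="#{100L * 1024 * 1024}"/>
</bean>
```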


RE: affinity key field not recognized c++

2018-08-09 Thread Floris Van Nee
Ah, thank you very much! That indeed fixes the problem. All the examples I could
find had the full name specified there, and since the classNames property also
used full names, I never thought of changing this to the simple name.
Apparently putting the full name there triggers this strange behavior where the
C++ code wants to send updated type metadata to the Java code, causing this
null error.

Thanks a lot again for the help!

-Floris

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: Thursday 09 August 2018 4:48 PM
To: user@ignite.apache.org
Subject: Re: affinity key field not recognized c++ [External]

Hello!

I am pretty confident that affinity key configuration is supported by C++.

There is one error in your configuration file: you are using the simple name
mapper, but still specify the package of the class in question. This causes
weird behavior on the Ignite side, but is trivial to fix:
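
The XML fragment that followed was stripped from the archive; based on the surrounding discussion, the fix presumably resembles the following sketch (the affinity field name is an assumption, not stated in the thread):

```xml
<bean class="org.apache.ignite.cache.CacheKeyConfiguration">
    <!-- Simple class name, matching a name mapper in simple-name mode. -->
    <property name="typeName" value="MicFc"/>
    <!-- Assumed affinity field; use whichever key field drives colocation. -->
    <property name="affinityKeyFieldName" value="market"/>
</bean>
```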






After that, I am able to put both MicFc and MicFc-typed BinaryObjects into 
cache, read them in C++ code using the snippet that you have specified.

There is no need of explicit support of Affinity from C++ code side in this 
case as it is handled purely from Java side.

Hope this helps,

--
Ilya Kasnacheev

2018-08-09 16:22 GMT+03:00 Floris Van Nee 
mailto:florisvan...@optiver.com>>:
Just an update: affinity key is indeed *not* supported in C++ at the moment.
By digging into the C++ source I found the following:

core/src/impl/binary/binary_type_updater_impl.cpp
line 78:
rawWriter.WriteString(0); // Affinity key is not supported for now.

It just always passes a null value for the affinity key. This obviously leads
to the error I saw, because on the Java side it tries to merge its valid
affinity key with the null value passed from the C++ code.

Does anyone know if a fix for this is planned? Being able to choose a
different mapping for your keys is quite a vital thing.

-Floris

From: Floris Van Nee
Sent: Thursday 09 August 2018 11:26 AM
To: user@ignite.apache.org
Subject: RE: affinity key field not recognized c++ [External]

Thanks for your reply. It is indeed the case that both Java and C++ 
configuration files are the same, and the AffinityKeyMapped annotation is not 
used in the Java declaration. Still, it is throwing an error.
I have attached here a minimal reproducing example.

My Java key class is the following:

public class MicFc implements Binarylizable, Comparable<MicFc> {
    public String market;
    public String feedcode;

    @Override
    public int compareTo(MicFc t) {
        int m = market.compareTo(t.market);
        return m == 0 ? feedcode.compareTo(t.feedcode) : m;
    }

    @Override
    public void readBinary(BinaryReader reader) throws BinaryObjectException {
        market = reader.readString("market");
        feedcode = reader.readString("feedcode");
    }

    @Override
    public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeString("market", market);
        writer.writeString("feedcode", feedcode);
    }
}

My configuration file (the same everywhere):































org.apache.ignite.examples.streaming.MicFc





And my C++ key class:
namespace
{
class MicFc
{
public:
std::string market, feedcode;
};
}
namespace ignite
{
namespace binary
{
IGNITE_BINARY_TYPE_START(MicFc)
IGNITE_BINARY_GET_TYPE_ID_AS_HASH(MicFc)
IGNITE_BINARY_GET_TYPE_NAME_AS_IS(MicFc)
IGNITE_BINARY_GET_FIELD_ID_AS_HASH
IGNITE_BINARY_IS_NULL_FALSE(MicFc)
IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(MicFc)

static void Write(BinaryWriter& writer, const MicFc& obj)
{
writer.WriteString("market", obj.market);
writer.WriteString("feedcode", obj.feedcode);
}

static void Read(BinaryReader& reader, MicFc& dst)
{
dst.market = reader.ReadString("market");
dst.feedcode = reader.ReadString("feedcode");
}


Re: continuous query remote filter issue

2018-08-09 Thread Som Som
in the first example I have not deployed the HelloAction class manually, but it
works correctly; it means that the class was successfully transferred and
deployed. But why does this not work in the case of the EmployeeEventFilter
class? Is it an error?

Thu, 9 Aug 2018, 15:46, Ilya Kasnacheev:

> Hello!
>
> I'm not sure that C# has peer class loading. Are you sure that you have
> this filter's code deployed on your server nodes?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-09 15:23 GMT+03:00 Som Som <2av10...@gmail.com>:
>
>> is there any information?
>>
>> -- Forwarded message -
>> From: Som Som <2av10...@gmail.com>
>> Date: Wed, 8 Aug 2018, 16:46
>> Subject: continuous query remote filter issue
>> To: 
>>
>>
>> hello.
>>
>> It looks like peerAssemblyLoadingMode flag doesn’t work correctly in case
>> of CacheEntryEventFilter:
>>
>>
>>
>> As an example:
>>
>>
>>
>> 1)  This code works fine and I see “Hello world” on the server console.
>> It means that HelloAction class was successfully transferred to server.
>>
>>
>>
>> class Program
>>
>> {
>>
>> static void Main(string[] args)
>>
>> {
>>
>> using (var ignite = Ignition.StartFromApplicationConfiguration())
>>
>> {
>>
>> var remotes = ignite.GetCluster().ForRemotes();
>>
>> remotes.GetCompute().Broadcast(new HelloAction());
>>
>> }
>>
>> }
>>
>>
>>
>> class HelloAction : IComputeAction
>>
>> {
>>
>> public void Invoke()
>>
>> {
>>
>> Console.WriteLine("Hello, World!");
>>
>> }
>>
>> }
>>
>> }
>>
>> 2)  But this code that sends the filter class to the remote server
>> node generates an error and I receive 4 entries of Employee instead of 2
>> as expected:
>>
>> class Program
>>
>> {
>>
>> public class Employee
>>
>> {
>>
>> public Employee(string name, long salary)
>>
>> {
>>
>> Name = name;
>>
>> Salary = salary;
>>
>> }
>>
>>
>>
>> [QuerySqlField]
>>
>> public string Name { get; set; }
>>
>>
>>
>> [QuerySqlField]
>>
>> public long Salary { get; set; }
>>
>>
>>
>> public override string ToString()
>>
>> {
>>
>> return string.Format("{0} [name={1}, salary={2}]", typeof
>> (Employee).Name, Name, Salary);
>>
>> }
>>
>> }
>>
>>
>>
>> class EmployeeEventListener : ICacheEntryEventListener<int, Employee>
>>
>> {
>>
>> public void OnEvent(IEnumerable<ICacheEntryEvent<int, Employee>> evts)
>>
>> {
>>
>> foreach(var evt in evts)
>>
>> Console.WriteLine(evt.Value);
>>
>> }
>>
>> }
>>
>>
>>
>> class EmployeeEventFilter : ICacheEntryEventFilter<int, Employee>
>>
>> {
>>
>> public bool Evaluate(ICacheEntryEvent<int, Employee> evt)
>>
>> {
>>
>> return evt.Value.Salary > 5000;
>>
>> }
>>
>> }
>>
>>
>>
>> static void Main(string[] args)
>>
>> {
>>
>> using (var ignite = Ignition.StartFromApplicationConfiguration())
>>
>> {
>>
>> var employeeCache = ignite.GetOrCreateCache<int, Employee>(
>> new CacheConfiguration("employee", new QueryEntity(
>> typeof(int), typeof(Employee))) { SqlSchema = "PUBLIC" });
>>
>>
>>
>>
>>
>>
>>
>> var query = new ContinuousQuery<int, Employee>(new
>> EmployeeEventListener())
>>
>> {
>>
>> Filter = new EmployeeEventFilter()
>>
>> };
>>
>>
>>
>> var queryHandle = employeeCache.QueryContinuous(query);
>>
>>
>>
>> employeeCache.Put(1, new Employee("James Wilson", 1000));
>>
>> employeeCache.Put(2, new Employee("Daniel Adams", 2000));
>>
>> employeeCache.Put(3, new Employee("Cristian Moss", 7000));
>>
>> employeeCache.Put(4, new Employee("Allison Mathis", 8000));
>>
>>
>>
>> Console.WriteLine("Press any key...");
>>
>> Console.ReadKey();
>>
>> }
>>
>> }
>>
>> }
>>
>> Server node console output:
>>
>> [16:26:33]__  
>>
>> [16:26:33]   /  _/ ___/ |/ /  _/_  __/ __/
>>
>> [16:26:33]  _/ // (7 7// /  / / / _/
>>
>> [16:26:33] /___/\___/_/|_/___/ /_/ /___/
>>
>> [16:26:33]
>>
>> [16:26:33] ver. 2.7.0.20180721#19700101-sha1:DEV
>>
>> [16:26:33] 2018 Copyright(C) Apache Software Foundation
>>
>> [16:26:33]
>>
>> [16:26:33] Ignite documentation: http://ignite.apache.org
>>
>> [16:26:33]
>>
>> [16:26:33] Quiet mode.
>>
>> [16:26:33]   ^-- Logging to file
>> 'C:\Ignite\apache-ignite-fabric-2.7.0.20180721-bin\work\log\ignite-b1061a07.0.log'
>>
>> [16:26:33]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
>>
>> [16:26:33]   ^-- To see **FULL** console log here add
>> 

Re: affinity key field not recognized c++

2018-08-09 Thread Ilya Kasnacheev
Hello!

I am pretty confident that affinity key configuration is supported by C++.

There is one error in your configuration file: you are using the simple name
mapper, but still specify the package of the class in question. This causes
weird behavior on the Ignite side, but is trivial to fix:






After that, I am able to put both MicFc and MicFc-typed BinaryObjects into
cache, read them in C++ code using the snippet that you have specified.

There is no need of explicit support of Affinity from C++ code side in this
case as it is handled purely from Java side.

Hope this helps,

-- 
Ilya Kasnacheev

2018-08-09 16:22 GMT+03:00 Floris Van Nee :

> Just an update: affinity key is indeed **not** supported in C++ at the
> moment. By digging into the C++ source I found the following:
>
>
>
> core/src/impl/binary/binary_type_updater_impl.cpp
>
> line 78:
>
> rawWriter.WriteString(0); // Affinity key is not supported for now.
>
>
>
> It just always passes in a null value for affinity key.. This obviously
> leads to the error I saw, because on the Java side it tries to merge its
> valid affinity key with the null value passed from the C++ code.
>
>
>
> Does anyone know if it is on the planning to fix this? It is quite a vital
> thing to be able to choose a different mapping for your keys..
>
>
>
> -Floris
>
>
>
> *From:* Floris Van Nee
> *Sent:* Thursday 09 August 2018 11:26 AM
> *To:* user@ignite.apache.org
> *Subject:* RE: affinity key field not recognized c++ [External]
>
>
>
> Thanks for your reply. It is indeed the case that both Java and C++
> configuration files are the same, and the AffinityKeyMapped annotation is
> not used in the Java declaration. Still, it is throwing an error.
>
> I have attached here a minimal reproducing example.
>
>
>
> My Java key class is the following:
>
>
>
> public class MicFc implements Binarylizable, Comparable<MicFc> {
>
> public String market;
>
> public String feedcode;
>
>
>
> @Override
>
> public int compareTo(MicFc t) {
>
> int m = market.compareTo(t.market);
>
> return m == 0 ? feedcode.compareTo(t.feedcode) : m;
>
> }
>
>
>
> @Override
>
> public void readBinary(BinaryReader reader) throws
> BinaryObjectException {
>
> market = reader.readString("market");
>
> feedcode = reader.readString("feedcode");
>
> }
>
>
>
> @Override
>
> public void writeBinary(BinaryWriter writer) throws
> BinaryObjectException {
>
> writer.writeString("market", market);
>
> writer.writeString("feedcode", feedcode);
>
> }
>
> }
>
>
>
> My configuration file (the same everywhere):
>
>
>
> 
>
> 
>
>  class="org.apache.ignite.cache.CacheKeyConfiguration">
>
>
> 
>
>
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
> 
>
> org.apache.ignite.
> examples.streaming.MicFc
>
> 
>
> 
>
> 
>
> 
>
>
>
> And my C++ key class:
>
> namespace
>
> {
>
> class MicFc
>
> {
>
> public:
>
> std::string market, feedcode;
>
> };
>
> }
>
> namespace ignite
>
> {
>
> namespace binary
>
> {
>
> IGNITE_BINARY_TYPE_START(MicFc)
>
> IGNITE_BINARY_GET_TYPE_ID_AS_HASH(MicFc)
>
> IGNITE_BINARY_GET_TYPE_NAME_AS_IS(MicFc)
>
> IGNITE_BINARY_GET_FIELD_ID_AS_HASH
>
> IGNITE_BINARY_IS_NULL_FALSE(MicFc)
>
> IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(MicFc)
>
>
>
> static void Write(BinaryWriter& writer, const MicFc& obj)
>
> {
>
> writer.WriteString("market", obj.market);
>
> writer.WriteString("feedcode",
> obj.feedcode);
>
> }
>
>
>
> static void Read(BinaryReader& reader, MicFc& dst)
>
> {
>
> dst.market = reader.ReadString("market");
>
> dst.feedcode =
> reader.ReadString("feedcode");
>
> }
>
>
>
> IGNITE_BINARY_TYPE_END
>
> }
>
> }
>
>
>
> To reproduce the error, first start a server with the config mentioned
> above and then run a C++ client with the same config (except for setting
>  clientMode=true) and then run the following code:
>
> MicFc mfc;
>
> mfc.market = “TEST”;
>
> mfc.feedcode=”TEST”;
>
> auto c = ignite.GetOrCreateCache("test");
>
> 

Statistics Monitoring Integrations

2018-08-09 Thread Dave Harvey
I've been able to look at cache and thread pool statistics using JVisualVM
with Mbeans support.   Has anyone found a way to get these statistics out
to a tool like NewRelic or DataDog?

Thanks,

Dave Harvey
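One common bridge to tools like NewRelic or DataDog is a small poller that reads MBean attributes over JMX and pushes the values to the APM agent. A minimal standalone sketch of the polling half is below; it queries a standard platform MBean, since the exact Ignite MBean `ObjectName`s depend on the Ignite instance name (substitute a pattern such as `org.apache:*` for the Ignite cache/thread-pool beans as appropriate):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class MBeanPoller {
    /** Reads one numeric attribute from the local MBean server; a real exporter
     *  would loop over an ObjectName pattern and push each value to the APM agent. */
    public static long readHeapUsed() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("java.lang:type=Memory");
            CompositeData usage = (CompositeData) server.getAttribute(name, "HeapMemoryUsage");
            return (Long) usage.get("used");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Forwarding the value to NewRelic/DataDog would happen here
        // (e.g. via the agent's statsd endpoint or HTTP API).
        System.out.println("heap.used=" + readHeapUsed());
    }
}
```

The same loop works for any MBean visible in JVisualVM; only the `ObjectName` and attribute names change.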



Re: Distributed closure with buffer of binary data

2018-08-09 Thread F.D.
You're right! My fault!

On Thu, Aug 9, 2018 at 2:37 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> WriteInt8Array accepts pointer and len, so I don't see why you have to
> pass char by char.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-09 1:32 GMT+03:00 F.D. :
>
>> Ok, but I think it's the same as WriteArray.
>>
>> For the moment I solved it in a different way, using encode/decode
>> functions.
>>
>> Thanks,
>>
>> On Wed, Aug 8, 2018 at 11:06 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> How about WriteInt8Array()?
>>>
>>> Regards,
>>>
>>> --
>>> Ilya Kasnacheev
>>>
>>> 2018-08-08 11:19 GMT+03:00 F.D. :
>>>
 Hello Igniters,

 My distributed closures work perfectly when the inputs are strings, but
 when I try to pass a buffer of bytes I get an error.

 The buffer of bytes arrives to me in a std::string, but when I try to
 use BinaryWriter::WriteString the string is truncated (ok, it was
 predictable). The question is: is there a method of BinaryWriter/BinaryReader
 that handles a buffer of char? (I found WriteArray, but I have to pass char
 by char).

 Thanks,
 F.D.

>>>
>>>
>
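The encode/decode workaround F.D. mentions can be as simple as Base64-encoding the byte buffer into a plain string before passing it through a string-only API. This is a sketch of the idea, not F.D.'s actual code:

```java
import java.util.Base64;

public class BufferCodec {
    // Turn an arbitrary byte buffer into a string that survives string-only APIs.
    public static String encode(byte[] buf) {
        return Base64.getEncoder().encodeToString(buf);
    }

    public static byte[] decode(String s) {
        return Base64.getDecoder().decode(s);
    }

    public static void main(String[] args) {
        byte[] data = {0, 1, 2, (byte) 0xFF};  // an embedded NUL would truncate a C-style string
        String wire = encode(data);
        byte[] back = decode(wire);
        System.out.println(wire + " -> " + back.length + " bytes");
    }
}
```

The cost is roughly a 33% size overhead on the wire; WriteInt8Array avoids that, since it carries the raw bytes plus a length.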


RE: affinity key field not recognized c++

2018-08-09 Thread Floris Van Nee
Just an update: affinity key is indeed *not* supported in C++ at the moment.
By digging into the C++ source I found the following:

core/src/impl/binary/binary_type_updater_impl.cpp
line 78:
rawWriter.WriteString(0); // Affinity key is not supported for now.

It just always passes in a null value for the affinity key. This obviously leads
to the error I saw, because on the Java side it tries to merge its valid
affinity key with the null value passed from the C++ code.

Does anyone know if there are plans to fix this? It is quite a vital
thing, to be able to choose a different mapping for your keys.

-Floris

From: Floris Van Nee
Sent: Thursday 09 August 2018 11:26 AM
To: user@ignite.apache.org
Subject: RE: affinity key field not recognized c++ [External]

Thanks for your reply. It is indeed the case that both Java and C++ 
configuration files are the same, and the AffinityKeyMapped annotation is not 
used in the Java declaration. Still, it is throwing an error.
I have attached here a minimal reproducing example.

My Java key class is the following:

public class MicFc implements Binarylizable, Comparable<MicFc> {
public String market;
public String feedcode;

@Override
public int compareTo(MicFc t) {
int m = market.compareTo(t.market);
return m == 0 ? feedcode.compareTo(t.feedcode) : m;
}

@Override
public void readBinary(BinaryReader reader) throws 
BinaryObjectException {
market = reader.readString("market");
feedcode = reader.readString("feedcode");
}

@Override
public void writeBinary(BinaryWriter writer) throws 
BinaryObjectException {
writer.writeString("market", market);
writer.writeString("feedcode", feedcode);
}
}

My configuration file (the same everywhere):

[XML configuration stripped by the mailing-list archive; per the surrounding
discussion, it declared a BinaryConfiguration for the key type and a
CacheKeyConfiguration naming org.apache.ignite.examples.streaming.MicFc with
affinityKeyFieldName="market".]

And my C++ key class:
namespace
{
class MicFc
{
public:
std::string market, feedcode;
};
}
namespace ignite
{
namespace binary
{
IGNITE_BINARY_TYPE_START(MicFc)
IGNITE_BINARY_GET_TYPE_ID_AS_HASH(MicFc)
IGNITE_BINARY_GET_TYPE_NAME_AS_IS(MicFc)
IGNITE_BINARY_GET_FIELD_ID_AS_HASH
IGNITE_BINARY_IS_NULL_FALSE(MicFc)
IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(MicFc)

static void Write(BinaryWriter& writer, const MicFc& obj)
{
writer.WriteString("market", obj.market);
writer.WriteString("feedcode", obj.feedcode);
}

static void Read(BinaryReader& reader, MicFc& dst)
{
dst.market = reader.ReadString("market");
dst.feedcode = reader.ReadString("feedcode");
}

IGNITE_BINARY_TYPE_END
}
}

To reproduce the error, first start a server with the config mentioned above 
and then run a C++ client with the same config (except for setting  
clientMode=true) and then run the following code:
MicFc mfc;
mfc.market = "TEST";
mfc.feedcode = "TEST";
auto c = ignite.GetOrCreateCache("test");
auto contains = c.ContainsKey(mfc); // this line throws the error
Java exception occurred [cls=org.apache.ignite.binary.BinaryObjectException, 
msg=Binary type has different affinity key fields [typeName=MicFc, 
affKeyFieldName1=market, affKeyFieldName2=null]]

-Floris

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: Thursday 09 August 2018 11:09 AM
To: user@ignite.apache.org
Subject: Re: affinity key field not recognized c++ [External]

Hello!

As far as my understanding goes, you have to supply cacheKeyConfiguration in 
both Java and C++ configuration files, and remove @AffinityKeyMapped from Java 
CustomKey class (or other ways of specifying it where applicable).

Regards,

--
Ilya Kasnacheev

2018-08-09 10:50 GMT+03:00 Floris Van Nee 
mailto:florisvan...@optiver.com>>:
Hi all,

I’m experiencing exactly the same issue as is described in a previous post on 
this mailing list: 

Re: continous query remote filter issue

2018-08-09 Thread Ilya Kasnacheev
Hello!

I'm not sure that C# has peer class loading. Are you sure that you have
this filter's code deployed on your server nodes?

Regards,

-- 
Ilya Kasnacheev

2018-08-09 15:23 GMT+03:00 Som Som <2av10...@gmail.com>:

> is there any information?
>

Re: H2 performs better than ignite !!?

2018-08-09 Thread Ilya Kasnacheev
Hello!

The expectation is that a distributed grid will inherently work slower on a
single node than a dedicated DB.
However, a distributed grid will also work on multiple nodes, where a
traditional DB wouldn't.
So extracting every last bit of single-node performance takes a back seat
when also considering multi-node performance and distributed querying.

Note that Apache Ignite also has more features than plain H2, such as data
streamers, near caching, expiry/eviction, etc, etc, and there's some
overhead.

Regards,

-- 
Ilya Kasnacheev

2018-08-09 15:39 GMT+03:00 the_palakkaran :

> Yes, I am running on a single node. Still ignite being in-memory, I
> expected
> it to perform better than H2. Is there anything I can do to make it faster?
> Like right now I have a single data region, does having multiple data
> regions give me better performance? Something like that?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
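When comparing single-node lookup numbers like these, the harness matters as much as the store: same warm-up, same key pattern, same timing for each strategy. A minimal sketch of such a harness (plain Java, with a HashMap standing in for whatever store is being measured, so only the harness shape is shown, not Ignite or H2):

```java
import java.util.HashMap;
import java.util.Map;

public class LookupBench {
    /** Times n key lookups against a non-empty store and returns elapsed nanoseconds. */
    public static long timeLookups(Map<Integer, String> store, int n) {
        long t0 = System.nanoTime();
        long hits = 0;
        for (int i = 0; i < n; i++)
            if (store.get(i % store.size()) != null) hits++;
        long elapsed = System.nanoTime() - t0;
        if (hits != n) throw new IllegalStateException("unexpected misses");
        return elapsed;
    }

    public static void main(String[] args) {
        Map<Integer, String> store = new HashMap<>();
        for (int i = 0; i < 100_000; i++) store.put(i, "v" + i);
        timeLookups(store, 1_000_000);            // warm-up pass (JIT, caches)
        long ns = timeLookups(store, 1_000_000);  // measured pass
        System.out.printf("1M gets in %.1f ms%n", ns / 1e6);
    }
}
```

Running the same harness shape against key-based cache gets and against SqlFieldsQuery makes the comparison between strategies fairer than timing whole application runs.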


Re: H2 performs better than ignite !!?

2018-08-09 Thread the_palakkaran
Yes, I am running on a single node. Still, Ignite being in-memory, I expected
it to perform better than H2. Is there anything I can do to make it faster?
For example, right now I have a single data region; would having multiple data
regions give me better performance? Something like that?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed closure with buffer of binary data

2018-08-09 Thread Ilya Kasnacheev
Hello!

WriteInt8Array accepts pointer and len, so I don't see why you have to pass
char by char.

Regards,

-- 
Ilya Kasnacheev

2018-08-09 1:32 GMT+03:00 F.D. :

> Ok, but I think it's the same as WriteArray.
>
> For the moment I solved it in a different way, using encode/decode functions.
>
> Thanks,
>
> On Wed, Aug 8, 2018 at 11:06 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> How about WriteInt8Array()?
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-08-08 11:19 GMT+03:00 F.D. :
>>
>>> Hello Igniters,
>>>
>>> My distributed closures work perfectly when the inputs are strings, but
>>> when I try to pass a buffer of bytes I get an error.
>>>
>>> The buffer of bytes arrives to me in a std::string, but when I try to
>>> use BinaryWriter::WriteString the string is truncated (ok, it was
>>> predictable). The question is: is there a method of BinaryWriter/BinaryReader
>>> that handles a buffer of char? (I found WriteArray, but I have to pass char
>>> by char).
>>>
>>> Thanks,
>>> F.D.
>>>
>>
>>


Fwd: continous query remote filter issue

2018-08-09 Thread Som Som
is there any information?

-- Forwarded message -
From: Som Som <2av10...@gmail.com>
Date: ср, 8 авг. 2018 г., 16:46
Subject: continous query remote filter issue
To: 


Hello.

It looks like the peerAssemblyLoadingMode flag doesn't work correctly in the
case of CacheEntryEventFilter:



As an example:



1)  This code works fine and I see "Hello, World!" on the server console. It
means that the HelloAction class was successfully transferred to the server.



class Program

{

static void Main(string[] args)

{

using (var ignite = Ignition
.StartFromApplicationConfiguration())

{

var remotes = ignite.GetCluster().ForRemotes();

remotes.GetCompute().Broadcast(new HelloAction());

}

}



class HelloAction : IComputeAction

{

public void Invoke()

{

Console.WriteLine("Hello, World!");

}

}

}

2)  But this code that sends the filter class to the remote server node
generates an error and I receive 4 entries of Employee instead of 2 as
expected:

class Program

{

public class Employee

{

public Employee(string name, long salary)

{

Name = name;

Salary = salary;

}



[QuerySqlField]

public string Name { get; set; }



[QuerySqlField]

public long Salary { get; set; }



public override string ToString()

{

return string.Format("{0} [name={1}, salary={2}]", typeof(
Employee).Name, Name, Salary);

}

}



class EmployeeEventListener : ICacheEntryEventListener<int, Employee>

{

public void OnEvent(IEnumerable<ICacheEntryEvent<int, Employee>>
evts)

{

foreach(var evt in evts)

Console.WriteLine(evt.Value);

}

}



class EmployeeEventFilter : ICacheEntryEventFilter<int, Employee>

{

public bool Evaluate(ICacheEntryEvent<int, Employee> evt)

{

return evt.Value.Salary > 5000;

}

}



static void Main(string[] args)

{

using (var ignite = Ignition
.StartFromApplicationConfiguration())

{

var employeeCache = ignite.GetOrCreateCache<int, Employee>(

new CacheConfiguration("employee", new QueryEntity(typeof
(int), typeof(Employee))) { SqlSchema = "PUBLIC" });







var query = new ContinuousQuery<int, Employee>(new
EmployeeEventListener())

{

Filter = new EmployeeEventFilter()

};



var queryHandle = employeeCache.QueryContinuous(query);



employeeCache.Put(1, new Employee("James Wilson", 1000));

employeeCache.Put(2, new Employee("Daniel Adams", 2000));

employeeCache.Put(3, new Employee("Cristian Moss", 7000));

employeeCache.Put(4, new Employee("Allison Mathis", 8000));



Console.WriteLine("Press any key...");

Console.ReadKey();

}

}

}

Server node console output:

[16:26:33]__  

[16:26:33]   /  _/ ___/ |/ /  _/_  __/ __/

[16:26:33]  _/ // (7 7// /  / / / _/

[16:26:33] /___/\___/_/|_/___/ /_/ /___/

[16:26:33]

[16:26:33] ver. 2.7.0.20180721#19700101-sha1:DEV

[16:26:33] 2018 Copyright(C) Apache Software Foundation

[16:26:33]

[16:26:33] Ignite documentation: http://ignite.apache.org

[16:26:33]

[16:26:33] Quiet mode.

[16:26:33]   ^-- Logging to file
'C:\Ignite\apache-ignite-fabric-2.7.0.20180721-bin\work\log\ignite-b1061a07.0.log'

[16:26:33]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'

[16:26:33]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}

[16:26:33]

[16:26:33] OS: Windows Server 2016 10.0 amd64

[16:26:33] VM information: Java(TM) SE Runtime Environment 1.8.0_161-b12
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.161-b12

[16:26:33] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.

[16:26:33] Configured plugins:

[16:26:33]   ^-- None

[16:26:33]

[16:26:33] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]

[16:26:33] Message queue limit is set to 0 which may lead to potential
OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
to message queues growth on sender and receiver sides.

[16:26:33] Security status [authentication=off, tls/ssl=off]

[16:26:35] Performance suggestions for grid  (fix if possible)

[16:26:35] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true

[16:26:35]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)

[16:26:35]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]'
to JVM options)

[16:26:35]   
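The intended behavior of the remote filter in the code above is that only entries with Salary > 5000 reach the listener, so the four puts should yield two events once the filter actually runs on the server. The predicate itself can be checked standalone; a plain-Java sketch mirroring the C# filter's logic (no Ignite involved):

```java
public class FilterCheck {
    public static class Employee {
        final String name;
        final long salary;
        public Employee(String name, long salary) { this.name = name; this.salary = salary; }
    }

    // Same predicate as EmployeeEventFilter.Evaluate in the C# code above.
    public static boolean passes(Employee e) { return e.salary > 5000; }

    public static void main(String[] args) {
        Employee[] puts = {
            new Employee("James Wilson", 1000),
            new Employee("Daniel Adams", 2000),
            new Employee("Cristian Moss", 7000),
            new Employee("Allison Mathis", 8000)
        };
        int delivered = 0;
        for (Employee e : puts)
            if (passes(e)) delivered++;   // the continuous-query listener would fire here
        System.out.println(delivered + " events delivered"); // prints "2 events delivered"
    }
}
```

Receiving all four events therefore means the filter never executed on the server, which is consistent with the filter assembly not having been peer-loaded.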

Re: Web console project upload

2018-08-09 Thread Alexey Kuznetsov
Orel,

Could you share the command that you used to deploy Web Console?

If in that command the data folder was mapped to a folder on your local
file system, then all the data can be restored.
Otherwise, the data was inside the docker image and I'm afraid it was also deleted.

Also, you may try to restore the docker container if it was not purged; see for
example here:
https://stackoverflow.com/questions/29202787/restore-deleted-container-docker


Alexey Kuznetsov


On Thu, Aug 9, 2018 at 6:26 PM Orel Weinstock (ExposeBox) <
o...@exposebox.com> wrote:

> Hi,
>
> I've built up a project on a web console deployed via docker and
> accidentally deleted the image. This resulted in the whole project
> disappearing server side (obviously). The problem is I can't upload the
> project I have so that I can continue editing it on the Web Console. Is
> this something that is "in the works" or should I just back up the
> /var/lib/mongo folder somewhere?
>
> --
>
> --
> *Orel Weinstock*
> Software Engineer
> Email:o...@exposebox.com 
> Website: www.exposebox.com
>
>


Re: Hibernate L2 cache with Ignite. How?

2018-08-09 Thread ezhuravlev
Hi,

Are you sure that you access the same cache that is used for L2? Try to
invoke Ignite.cacheNames() - it will give you a list of caches in the
cluster; using that, you can check that you use the right cache.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Web console project upload

2018-08-09 Thread Orel Weinstock (ExposeBox)
Hi,

I've built up a project on a web console deployed via docker and
accidentally deleted the image. This resulted in the whole project
disappearing server side (obviously). The problem is I can't upload the
project I have so that I can continue editing it on the Web Console. Is
this something that is "in the works" or should I just back up the
/var/lib/mongo folder somewhere?

-- 

-- 
*Orel Weinstock*
Software Engineer
Email:o...@exposebox.com 
Website: www.exposebox.com


Re: H2 performs better than ignite !!?

2018-08-09 Thread Mikael

Hi!

H2 is optimized for what it does and its data format; only the H2
parser is used in Ignite. If you run a single node you should expect it
to be a bit slower compared to H2. Are you running a single node?


It's only with multiple nodes that you can expect it to be faster
(depending of course on the data and where it is located and so on).


Mikael


Den 2018-08-09 kl. 12:41, skrev the_palakkaran:

Hi,

When I tried running few db lookup scenarios in H2 and ignite, I got better
performance for H2. All were basic db operations like =, <=, >=.

These were the figures for 2million records(2 lookups for one record)

H2 : 3 minutes

Ignite (cache gets based on keys) : 3.4 minutes

Ignite (SQLFields query based) : 5 minutes

Below is my sample cache config:

 CacheConfiguration<Key, Model> cacheConfig = new CacheConfiguration<>("paramControlCache");
 cacheConfig.setIndexedTypes(Key.class, Model.class);
 cacheConfig.setReadThrough(false);
 cacheConfig.setAtomicityMode(ATOMIC);
 cacheConfig.setCacheMode(CacheMode.PARTITIONED);
 CacheCacheFactory factory = new CacheCacheFactory();
 cacheConfig.setCacheStoreFactory(factory);
 cacheConfig.setDataRegionName("CacheDataRegion");
 cacheConfig.setOnheapCacheEnabled(true);

How does this happen? Is there something I am missing out ??





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/






H2 performs better than ignite !!?

2018-08-09 Thread the_palakkaran
Hi,

When I tried running few db lookup scenarios in H2 and ignite, I got better
performance for H2. All were basic db operations like =, <=, >=.

These were the figures for 2million records(2 lookups for one record)

H2 : 3 minutes

Ignite (cache gets based on keys) : 3.4 minutes

Ignite (SQLFields query based) : 5 minutes

Below is my sample cache config:

CacheConfiguration<Key, Model> cacheConfig = new CacheConfiguration<>("paramControlCache");
cacheConfig.setIndexedTypes(Key.class, Model.class);
cacheConfig.setReadThrough(false);
cacheConfig.setAtomicityMode(ATOMIC);
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
CacheCacheFactory factory = new CacheCacheFactory();
cacheConfig.setCacheStoreFactory(factory);
cacheConfig.setDataRegionName("CacheDataRegion");
cacheConfig.setOnheapCacheEnabled(true);

How does this happen? Is there something I am missing out ??





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Any mbean call to get all deployed services in a grid

2018-08-09 Thread Calvin KL Wong, CLSA
Hi,

Is there any mbean to get all deployed services in a grid (or in a node)?

Thanks,
Calvin


Calvin KL Wong
Sr. Lead Engineer, Execution Services
D  +852 2600 7983  |  M  +852 9267 9471  |  T  +852 2600 
5/F, One Island East, 18 Westlands Road, Island East, Hong Kong


clsa.com
Insights. Liquidity. Capital.


A CITIC Securities Company



RE: affinity key field not recognized c++

2018-08-09 Thread Floris Van Nee
Thanks for your reply. It is indeed the case that both Java and C++ 
configuration files are the same, and the AffinityKeyMapped annotation is not 
used in the Java declaration. Still, it is throwing an error.
I have attached here a minimal reproducing example.

My Java key class is the following:

public class MicFc implements Binarylizable, Comparable<MicFc> {
public String market;
public String feedcode;

@Override
public int compareTo(MicFc t) {
int m = market.compareTo(t.market);
return m == 0 ? feedcode.compareTo(t.feedcode) : m;
}

@Override
public void readBinary(BinaryReader reader) throws 
BinaryObjectException {
market = reader.readString("market");
feedcode = reader.readString("feedcode");
}

@Override
public void writeBinary(BinaryWriter writer) throws 
BinaryObjectException {
writer.writeString("market", market);
writer.writeString("feedcode", feedcode);
}
}

My configuration file (the same everywhere):

[XML configuration stripped by the mailing-list archive; per the surrounding
discussion, it declared a BinaryConfiguration for the key type and a
CacheKeyConfiguration naming org.apache.ignite.examples.streaming.MicFc with
affinityKeyFieldName="market".]

And my C++ key class:
namespace
{
class MicFc
{
public:
std::string market, feedcode;
};
}
namespace ignite
{
namespace binary
{
IGNITE_BINARY_TYPE_START(MicFc)
IGNITE_BINARY_GET_TYPE_ID_AS_HASH(MicFc)
IGNITE_BINARY_GET_TYPE_NAME_AS_IS(MicFc)
IGNITE_BINARY_GET_FIELD_ID_AS_HASH
IGNITE_BINARY_IS_NULL_FALSE(MicFc)
IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(MicFc)

static void Write(BinaryWriter& writer, const MicFc& obj)
{
writer.WriteString("market", obj.market);
writer.WriteString("feedcode", obj.feedcode);
}

static void Read(BinaryReader& reader, MicFc& dst)
{
dst.market = reader.ReadString("market");
dst.feedcode = reader.ReadString("feedcode");
}

IGNITE_BINARY_TYPE_END
}
}

To reproduce the error, first start a server with the config mentioned above 
and then run a C++ client with the same config (except for setting  
clientMode=true) and then run the following code:
MicFc mfc;
mfc.market = "TEST";
mfc.feedcode = "TEST";
auto c = ignite.GetOrCreateCache("test");
auto contains = c.ContainsKey(mfc); // this line throws the error
Java exception occurred [cls=org.apache.ignite.binary.BinaryObjectException, 
msg=Binary type has different affinity key fields [typeName=MicFc, 
affKeyFieldName1=market, affKeyFieldName2=null]]

-Floris

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: Thursday 09 August 2018 11:09 AM
To: user@ignite.apache.org
Subject: Re: affinity key field not recognized c++ [External]

Hello!

As far as my understanding goes, you have to supply cacheKeyConfiguration in 
both Java and C++ configuration files, and remove @AffinityKeyMapped from Java 
CustomKey class (or other ways of specifying it where applicable).

Regards,

--
Ilya Kasnacheev

2018-08-09 10:50 GMT+03:00 Floris Van Nee 
mailto:florisvan...@optiver.com>>:
Hi all,

I’m experiencing exactly the same issue as is described in a previous post on 
this mailing list: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-td15959.html
In short – defining an XML config with the appropriate binaryConfiguration (for 
Java/C++ interopability) and cacheKeyConfiguration (to define an 
affinityKeyFieldName for a certain key type) will fail when running from C++. 
Unfortunately, the earlier item on the mailing list didn’t find/post a solution 
to the problem.

My custom key type is a class with two String members. I get the following 
error when I try to retrieve something from the cache:

An error occurred: Java exception occurred 
[cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has 
different affinity key fields [typeName=CustomKey, affKeyFieldName1=string_1, 
affKeyFieldName2=null]]

Running exactly the same from Java works fine. Also, when I 

Re: affinity key field not recognized c++

2018-08-09 Thread Ilya Kasnacheev
Hello!

As far as my understanding goes, you have to supply cacheKeyConfiguration
in both Java and C++ configuration files, and remove @AffinityKeyMapped
from Java CustomKey class (or other ways of specifying it where applicable).

Regards,

-- 
Ilya Kasnacheev

2018-08-09 10:50 GMT+03:00 Floris Van Nee :

> Hi all,
>
>
>
> I’m experiencing exactly the same issue as is described in a previous post
> on this mailing list: http://apache-ignite-users.
> 70518.x6.nabble.com/Affinity-Key-field-is-not-identified-
> if-binary-configuration-is-used-on-cache-key-object-td15959.html
>
> In short – defining an XML config with the appropriate binaryConfiguration
> (for Java/C++ interopability) and cacheKeyConfiguration (to define an
> affinityKeyFieldName for a certain key type) will fail when running from
> C++. Unfortunately, the earlier item on the mailing list didn’t find/post a
> solution to the problem.
>
>
>
> My custom key type is a class with two String members. I get the following
> error when I try to retrieve something from the cache:
>
>
>
> An error occurred: Java exception occurred 
> [cls=org.apache.ignite.binary.BinaryObjectException,
> msg=Binary type has different affinity key fields [typeName=CustomKey,
> affKeyFieldName1=string_1, affKeyFieldName2=null]]
>
>
>
> Running exactly the same from Java works fine. Also, when I remove the
> cacheKeyConfiguration part from the XML, it runs fine in both Java and C++
> (but then this runs without the proper affinity key field of course).
>
>
>
> It seems like this is a bug, or am I missing something?
>
>
>
> -Floris
>
>
>
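The reason both the Java and C++ sides must agree on the affinity key configuration can be illustrated standalone: the cluster routes a key to a partition using only the affinity field, so two clients that disagree on which field that is would route the same key differently. A simplified hash-modulo sketch of the idea (Ignite's real RendezvousAffinityFunction is more elaborate):

```java
public class AffinityDemo {
    /** Partition derived from the affinity field only (simplified;
     *  real Ignite uses rendezvous hashing over partition assignments). */
    public static int partitionFor(String affinityField, int partitions) {
        return Math.abs(affinityField.hashCode() % partitions);
    }

    public static void main(String[] args) {
        int parts = 1024;
        // Keys (market, feedcode): with market as the affinity field,
        // both keys below map to the same partition and are co-located.
        int p1 = partitionFor("XAMS", parts);  // key ("XAMS", "FC1") -> hash market only
        int p2 = partitionFor("XAMS", parts);  // key ("XAMS", "FC2") -> hash market only
        System.out.println("co-located: " + (p1 == p2)); // prints "co-located: true"

        // If one side (e.g. the C++ client) hashed the whole key instead,
        // it would generally compute a different partition:
        System.out.println("full-key partitions: "
            + partitionFor("XAMS|FC1", parts) + ", " + partitionFor("XAMS|FC2", parts));
    }
}
```

This is why the server rejects the mismatch up front with "Binary type has different affinity key fields" rather than silently routing the same key to two different nodes.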


When using CacheMode.LOCAL, OOM

2018-08-09 Thread NO
Hi,
I'm going to use Ignite instead of Guava. During testing, I find that
CacheMode.LOCAL often runs out of memory (OOM). Please help me check what's
wrong with it. Thank you very much.



Ignite version : 2.6.0


jdk 1.8.0_151-b12


Test Code :
==
public class LocalCacheDemo {
public static void main(String[] args) throws InterruptedException {

String cacheName = "localCache";
String regionName = "localRegin";

DataStorageConfiguration dataStorageConfiguration = new 
DataStorageConfiguration();

DataRegionConfiguration localReginConf = new DataRegionConfiguration();
localReginConf.setName(regionName);
localReginConf.setMetricsEnabled(true);
localReginConf.setPersistenceEnabled(false);
localReginConf.setInitialSize(256 * 1024 * 1024);
long max = 2L * 1024L * 1024L * 1024L;
localReginConf.setMaxSize(max);
localReginConf.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
dataStorageConfiguration.setDataRegionConfigurations(localReginConf);
dataStorageConfiguration.setPageSize(1024 * 8);

CacheConfiguration cacheCfg = new CacheConfiguration(cacheName);
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setDataRegionName(regionName);
cacheCfg.setBackups(0);
cacheCfg.setStatisticsEnabled(true);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setOnheapCacheEnabled(false);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCacheConfiguration(cacheCfg);
cfg.setDataStorageConfiguration(dataStorageConfiguration);

Ignite ignite = Ignition.start(cfg);

final ClusterGroup clusterGroup = ignite.cluster().forLocal();

final IgniteCache cache = 
ignite.getOrCreateCache(cacheName);

final ExecutorService pool = Executors.newFixedThreadPool(8);

for (int i = 0; i < 8; i++) {
pool.submit(new Runnable() {
@Override
public void run() {
while (true) {
cache.put(UUID.randomUUID().toString(), 
UUID.randomUUID().toString().getBytes());
}
}
});
}
}
}

==
JVM params:


 -server -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+ScavengeBeforeFullGC -Xmx2g 
-Xms2g -XX:MaxMetaspaceSize=100m -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails 
-XX:+PrintTenuringDistribution -XX:+PrintGCDateStamps 
-XX:+PrintGCApplicationStoppedTime -XX:NumberOfGCLogFiles=1 
-XX:GCLogFileSize=256M -Xloggc:/home/qipu/production/localCache/gc.log 

===
===
GC log:


2018-08-09T16:36:59.630+0800: 750.387: Total time for which application threads were stopped: 0.0104034 seconds
2018-08-09T16:36:59.630+0800: 750.387: [GC pause (G1 Evacuation Pause) (young) (initial-mark)
Desired survivor size 6815744 bytes, new threshold 15 (max 15)
, 0.0041734 secs]
   [Parallel Time: 3.1 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 750387.6, Avg: 750387.7, Max: 750387.8, Diff: 0.1]
      [Ext Root Scanning (ms): Min: 0.3, Avg: 0.5, Max: 0.9, Diff: 0.6, Sum: 2.1]
      [Code Root Marking (ms): Min: 0.2, Avg: 0.6, Max: 1.5, Diff: 1.3, Sum: 2.6]
      [Update RS (ms): Min: 0.6, Avg: 1.4, Max: 1.8, Diff: 1.2, Sum: 5.6]
         [Processed Buffers: Min: 2, Avg: 3.5, Max: 5, Diff: 3, Sum: 14]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Object Copy (ms): Min: 0.1, Avg: 0.3, Max: 0.4, Diff: 0.3, Sum: 1.3]
      [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.4]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Total (ms): Min: 2.9, Avg: 3.0, Max: 3.0, Diff: 0.1, Sum: 11.9]
      [GC Worker End (ms): Min: 750390.7, Avg: 750390.7, Max: 750390.7, Diff: 0.0]
   [Code Root Fixup: 0.1 ms]
   [Code Root Migration: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 0.9 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.2 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.0 ms]
   [Eden: 0.0B(102.0M)->0.0B(102.0M) Survivors: 0.0B->0.0B Heap: 2046.5M(2048.0M)->2046.5M(2048.0M)]
 [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-08-09T16:36:59.635+0800: 750.392: Total time for which application threads were stopped: 0.0045325 seconds
2018-08-09T16:36:59.635+0800: 750.392: [GC concurrent-root-region-scan-start]
2018-08-09T16:36:59.635+0800: 750.392: [GC concurrent-root-region-scan-end, 0.078 secs]
2018-08-09T16:36:59.635+0800: 750.392: [GC concurrent-mark-start]
2018-08-09T16:36:59.635+0800: 750.392: [Full GC (Allocation Failure)
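
The log above shows a 2 GB heap that is effectively full (2046.5M of 2048.0M) followed by a Full GC on allocation failure, which points at memory exhaustion rather than a tuning problem. A minimal sketch of one mitigation, assuming Ignite 2.x off-heap page memory: cap the cache's data region and enable page eviction so an unbounded put() loop cannot grow without limit. The region name and the 4 GB size here are illustrative assumptions, not values from the original post.

```java
// Hedged sketch: bound the off-heap data region and let Ignite evict
// pages when the cap is reached. API names are from the public
// Ignite 2.x configuration API; sizes/names are illustrative.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("bounded_region");                      // illustrative name
regionCfg.setMaxSize(4L * 1024 * 1024 * 1024);            // 4 GB off-heap cap
regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

storageCfg.setDataRegionConfigurations(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
```

Note that cache entries live off-heap in Ignite 2.x, so the heap pressure in this log may also come from the temporary UUID strings and byte arrays the load loop allocates; a larger -Xmx would only delay, not remove, that churn.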

===

affinity key field not recognized c++

2018-08-09 Thread Floris Van Nee
Hi all,

I'm experiencing exactly the same issue as is described in a previous post on 
this mailing list: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-td15959.html
In short: defining an XML config with the appropriate binaryConfiguration (for 
Java/C++ interoperability) and cacheKeyConfiguration (to define an 
affinityKeyFieldName for a certain key type) fails when running from C++. 
Unfortunately, the earlier thread never posted a solution to the problem.

My custom key type is a class with two String members. I get the following 
error when I try to retrieve something from the cache:

An error occurred: Java exception occurred 
[cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has 
different affinity key fields [typeName=CustomKey, affKeyFieldName1=string_1, 
affKeyFieldName2=null]]

Running exactly the same from Java works fine. Also, when I remove the 
cacheKeyConfiguration part from the XML, it runs fine in both Java and C++ (but 
then this runs without the proper affinity key field of course).

It seems like this is a bug, or am I missing something?

-Floris
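
[Not part of the original thread — a hedged sketch of the Java-side equivalent of the XML config being discussed, using the type and field names from the error message above ("CustomKey", "string_1"). Whether this resolves the C++ mismatch is exactly what the thread is asking; the compact-footer setting reflects the documented C++ interoperability guidance for Ignite 2.x and should be verified against your version.]

```java
// Sketch: declare the affinity key field in the IgniteConfiguration
// shared by all platforms. "CustomKey" and "string_1" mirror the
// names reported in the BinaryObjectException above.
BinaryConfiguration binCfg = new BinaryConfiguration();
// Compact footers are documented as unsupported by the C++ client in 2.x.
binCfg.setCompactFooter(false);

CacheKeyConfiguration keyCfg =
    new CacheKeyConfiguration("CustomKey", "string_1");

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binCfg);
cfg.setCacheKeyConfiguration(keyCfg);
```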



Re: Partitions distribution across nodes

2018-08-09 Thread dkarachentsev
Hi Akash,

1) Actually, exchange is a short process during which nodes remap partitions.
But Ignite uses late affinity assignment, which means the new affinity
distribution is switched to only after rebalancing completes. In other words,
after rebalancing the partition distribution is switched atomically.
But you don't have to wait for rebalancing to finish, because it runs
asynchronously.

2) I think it would be simpler to use IgniteCluster to determine the number of
nodes [1]:

Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

if (ignite.cluster().forServers().nodes().size() == 4) {
    // ... loadCache
}

3) No, you can store a custom flag value in a cache and use putIfAbsent() to
atomically check whether some action was already performed.
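
A hedged sketch of that putIfAbsent() idea, under the assumption of a running Ignite node (cache and key names here are illustrative, not from the thread): IgniteCache.putIfAbsent() returns true only for the one caller that actually wrote the absent value, so exactly one node performs the one-time action.

```java
// Flag cache shared by all nodes; name is illustrative.
IgniteCache<String, Boolean> flags = ignite.getOrCreateCache("onceFlags");

// putIfAbsent() returns true iff the key was absent and this call set it,
// so only the first node to get here runs the one-time load.
if (flags.putIfAbsent("cacheLoaded", Boolean.TRUE)) {
    cache.loadCache(null);
}
```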

[1] https://apacheignite.readme.io/docs/cluster-groups

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/