Null Pointer Error in GridDhtPartitionsExchangeFuture

2020-02-12 Thread wentat
Hi all, I am evaluating Apache Ignite 2.7 failover scenarios. We are testing 3
different scenarios:
1. Swap rebalance - kill a node, then add a new node
2. Scale up - add a new node
3. Scale down - kill a node

I have a cluster of 30 nodes with a large dataset of 450 million items.

Test 1

In scenario 1:
I started node 31 and killed node 1. Node 31 was not in the baseline topology,
but it shares the same XML configuration file, so the cluster detected it. I
then used control.sh --baseline remove to drop node 1, which was offline, and
added node 31, which was outside the original topology. This step worked fine.
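For reference, the same baseline change can also be done through the cluster
API. Below is a minimal sketch, assuming an active cluster; the config file
name is a placeholder:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class BaselineSwap {
    public static void main(String[] args) {
        // "ignite-config.xml" stands in for the shared XML file.
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
            // Make the currently alive server nodes the new baseline:
            // node 1 (offline) drops out, node 31 (just joined) is added.
            // This mirrors `control.sh --baseline remove` / `--baseline add`.
            ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes());
        }
    }
}
```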

In scenario 2:
I started node 1 and added it back to the cluster via the steps above; then,
suddenly, 3 other nodes in the cluster crashed. The reason could be that I did
not remove the old work directory on node 1. In any case, the results I got
from the crashed servers are:

```
java.lang.NullPointerException
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.cacheGroupAddedOnExchange(GridDhtPartitionsExchangeFuture.java:492)
    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$14.applyx(CacheAffinitySharedManager.java:1598)
    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$14.applyx(CacheAffinitySharedManager.java:1590)
    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1206)
    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onReassignmentEnforced(CacheAffinitySharedManager.java:1590)
    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onServerLeftWithExchangeMergeProtocol(CacheAffinitySharedManager.java:1546)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3239)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3191)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onBecomeCoordinator(GridDhtPartitionsExchangeFuture.java:4559)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$3500(GridDhtPartitionsExchangeFuture.java:139)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$9$1$1.apply(GridDhtPartitionsExchangeFuture.java:4331)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$9$1$1.apply(GridDhtPartitionsExchangeFuture.java:4320)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:385)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:355)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$9$1.call(GridDhtPartitionsExchangeFuture.java:4320)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$9$1.call(GridDhtPartitionsExchangeFuture.java:4316)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6816)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
```

and

```
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException:
Failed to send message (node left topology): TcpDiscoveryNode
[id=c6cd8563-ca40-4563-8dc0-4626c0c8111e, addrs=[100.74.26.173, 127.0.0.1],
sockAddrs=[/127.0.0.1:47500, someip:47500], discPort=47500, order=12,
intOrder=12, lastExchangeTime=1581324395969, loc=false,
ver=2.7.0#20181201-sha1:256ae401, isClient=false]
    at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3270)
    at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)
    at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)
    at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2713)
    at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2672)
    at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1656)
    at org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1766)
    at org.
```

Re: Dynamic Cache Change not allowed

2020-02-12 Thread nithin91
Hi,

No, I am creating a new cache, configuring it from Java in client mode, and
trying to load it from Java in client mode.

The following is the error I get:

```
Feb 13, 2020 11:34:40 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message: null
java.io.IOException: Failed to get acknowledge for message:
TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=b9bb52d3071-613fd9b8-0c00-4dde-ba8f-8f5341734a3c,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null, isClient=true]]
    at org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1398)
    at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

Feb 13, 2020 11:34:47 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to reconnect to cluster (consider increasing 'networkTimeout'
configuration property) [networkTimeout=5000]
[11:34:52] Ignite node stopped OK [uptime=00:00:24.772]
Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.IgniteClientDisconnectedException: Failed to execute
dynamic cache change request, client node disconnected.
    at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
    at org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:3023)
    at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2992)
    at Load.OrdersLoad.main(OrdersLoad.java:82)
Caused by: class org.apache.ignite.IgniteClientDisconnectedException: Failed
to execute dynamic cache change request, client node disconnected.
    at org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:952)
    at org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:948)
    ... 4 more
Caused by: class org.apache.ignite.internal.IgniteClientDisconnectedCheckedException:
Failed to execute dynamic cache change request, client node disconnected.
    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.onDisconnected(GridCacheProcessor.java:1180)
    at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:3949)
    at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:821)
    at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:604)
    at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2667)
    at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2705)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)
```






Re: Ignite Cluster and Kubernetes Cluster

2020-02-12 Thread pg31
Yes. You should deploy each Ignite cluster in a different namespace.





Re: Nodes started on local machine require more than 80% of physical RAM

2020-02-12 Thread Stéphane Thibaud
Thank you. I am running this on a Google Cloud instance with Container OS and
was trying to find a minimal setup first. Maybe I need to add some more RAM to
the instance, but 4GB for Container OS seems a bit much. Thanks for letting me
know in any case! :-)


Kind regards,

Stéphane

Thu, Feb 13, 2020, 7:55 Mikhail :

> Hi Stephane,
>
> Ignite has a check: it sums the JVM heap and off-heap sizes of all the nodes
> in the cluster that run on the same host, and checks whether the total takes
> more than 80% of the host RAM OR *leaves less than 4GB free*:
>
>
> https://github.com/apache/ignite/blob/ef6764e99c318a857ec92d6a270d8ef621cfde66/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1753
>
> Your system has only 1.5GB, so you will always see this message. As long as
> the "required" size is less than "available" and there's still some space
> left for the OS and your tools, you can ignore it. Plus, even if you require
> more space than your RAM capacity, I believe you use swap, don't you? So
> it's still okay to run Ignite, but please remember that you can see poor
> performance due to lack of RAM.
>
> Thanks,
> Mike.
>
>
>
>


Re: sql insert, but key is null

2020-02-12 Thread Evgenii Zhuravlev
Yes, you're right.
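A minimal sketch of the corrected layout, with the class and field names taken
from the thread (the annotation placement is the point; everything else is
boilerplate):

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;

// Key class: the key fields live here, annotated for SQL.
class PersonKey {
    @QuerySqlField
    private Long id;

    @QuerySqlField
    private String type;

    // constructor, equals/hashCode, toString ...
}

// Value class: only the non-key fields remain.
class Person {
    @QuerySqlField
    private String name;

    @QuerySqlField
    private String zip;

    // constructor, getters/setters, toString ...
}
```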

Tue, Feb 11, 2020 at 17:03, Edward Chen :

> No, PersonKey doesn't have any annotation.
>
> Do you mean I need to add @QuerySqlField to PersonKey and remove those
> fields from the Person class?
>
>
> On 2/11/2020 7:23 PM, Evgenii Zhuravlev wrote:
>
> You have another class - PersonKey - do you have annotations there?
>
> Ignite is a key-value store, so the Person object shouldn't contain these
> key fields.
>
Tue, Feb 11, 2020 at 16:14, Edward Chen :
>
>> Yes, all of them are defined in Person.
>>
>>
>> On 2/11/2020 6:29 PM, Evgenii Zhuravlev wrote:
>>
>> Did you add it to all fields in both key and value?
>>
>> Evgenii
>>
Tue, Feb 11, 2020 at 15:18, Edward Chen :
>>
>>> I just added @QuerySqlField to the Java fields.
>>>
>>> Does Ignite have an annotation for the primary key?
>>>
>>>
>>> Evgenii
>>>
Tue, Feb 11, 2020 at 13:59, Edward Chen :
>>>
 Hello,

 I am using Ignite 2.7.6 and testing its SQL insert function. I have the
 following code:

 PersonKey {
     Long id;
     String type;
     // constructor, getter, setter
     // hashCode, toString ...
 }

 Person {
     Long id;
     String type;
     String name;
     String zip;

     public PersonKey getKey() { return new PersonKey(...); }

     // constructor, getter, setter
     // hashCode, toString ...
 }

 insert SQL: insert into Person(id, type, name, zip) values (100, 'S',
 'John', '11223')

 When getting data back from the cache:

 Iterator<..> iter = cache.iterator();
 while (iter.hasNext()) {
     Cache.Entry entry = iter.next();
     entry.getKey();  // --> *0, null*
 }

 The last output is not correct; it should be *"100, S"*.

 Any inputs please ?

 Thanks

>>>
>>
>


Re: Nodes started on local machine require more than 80% of physical RAM

2020-02-12 Thread Mikhail
Hi Stephane,

Ignite has a check: it sums the JVM heap and off-heap sizes of all the nodes
in the cluster that run on the same host, and checks whether the total takes
more than 80% of the host RAM OR *leaves less than 4GB free*:

https://github.com/apache/ignite/blob/ef6764e99c318a857ec92d6a270d8ef621cfde66/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1753

Your system has only 1.5GB, so you will always see this message. As long as
the "required" size is less than "available" and there's still some space
left for the OS and your tools, you can ignore it. Plus, even if you require
more space than your RAM capacity, I believe you use swap, don't you? So it's
still okay to run Ignite, but please remember that you can see poor
performance due to lack of RAM.

Thanks,
Mike.





Re: Dynamic Cache Change not allowed

2020-02-12 Thread Evgenii Zhuravlev
Hi,

Are you trying to recreate the same cache from java, but with another
configuration? Can you share the full exception with a stack trace?

Evgenii

Wed, Feb 12, 2020 at 00:04, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Forgot to attach the bean file. Attached it now: config.txt
>
>
>
>


Re: Nodes started on local machine require more than 80% of physical RAM

2020-02-12 Thread Denis Magda
Hi Stéphane,

This message relates to Ignite off-heap memory. You need to adjust the size
of its data regions. Hope this page helps:
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions
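A minimal sketch of such an adjustment, assuming the default data region and
an illustrative 256 MB cap:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SmallHostNode {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Cap the default data region so that JVM heap + off-heap fits
        // comfortably into a small host (256 MB is an illustrative value).
        storageCfg.getDefaultDataRegionConfiguration()
            .setMaxSize(256L * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}
```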

-
Denis


On Wed, Feb 12, 2020 at 12:07 AM Stéphane Thibaud 
wrote:

> Hello!
>
> I am deploying my first Ignite node, but I encountered the following
> message
> on start:
>
> [07:42:50] Nodes started on local machine require more than 80% of physical
> RAM what can lead to significant slowdown due to swapping (please decrease
> JVM heap size, data region size or checkpoint buffer size) [required=950MB,
> available=1692MB]
>
> I applied the following tweaks to the JVM: -Xms512m and -Xmx512m. This had
> an effect, in that the 'required' part of the message above went down.
> However, required/available < 0.8 now, and I don't know why the message
> still appears. Any ideas?
>
>
> Kind regards,
>
> Stéphane
>
>
>
>


Re: Loading and Fetching the Data using Node js.

2020-02-12 Thread Denis Magda
Hello,

Have you tried creating Node.js classes with a structure similar to the Java
objects and using them in the put/get methods?
https://apacheignite.readme.io/docs/nodejs-thin-client-binary-types

As for the initial data loading, an efficient API has not been added to the
Node.js thin client yet. Use Java IgniteDataStreamer or CacheStore.load to
load records into Ignite; a sketch follows.
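A minimal Java-side sketch of the streamer approach. The cache name and the
key/value shapes are assumptions based on the example in the question; the
config path is a placeholder:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class PersonCacheLoader {
    // Minimal stand-ins for the key/value classes described in the question.
    // Real key classes should also implement equals() and hashCode().
    static class PersonKey {
        String firstName, lastName;
        PersonKey(String f, String l) { firstName = f; lastName = l; }
    }

    static class PersonInfo {
        String address; int age;
        PersonInfo(String a, int age) { address = a; this.age = age; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("client-config.xml");
             IgniteDataStreamer<PersonKey, PersonInfo> streamer =
                 ignite.dataStreamer("PersonCache")) {   // assumed existing cache
            // addData() batches entries and routes them straight to the
            // owning nodes; much faster than per-entry cache.put() for bulk
            // loads. Closing the streamer flushes the remaining batches.
            for (int i = 0; i < 1_000_000; i++)
                streamer.addData(new PersonKey("First" + i, "Last" + i),
                                 new PersonInfo("Street " + i, 30));
        }
    }
}
```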

-
Denis


On Tue, Feb 11, 2020 at 11:50 PM nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com> wrote:

> Hi
>
> I am new to Apache Ignite.Can someone help me on how to fetch and load the
> data from ignite cache
> using node js without using sql field queries option. Cache is loaded using
> CacheJDBCPOJO Store and the Key and Value types are custom types defined
> using JAVA.As these classes are defined in Java not sure on how to fetch
> the
> data using node.
>
> Hope the following example , explains the issue better.
>
> We have ignite cache of custom key Type i.e Person Key with attributes
> Person First Name and person Last Name and custom value type  i.e Person
> Info with attributes Person Address and Person Age etc.
>  These classes are defined in Java and the caches are configured in Bean
> File and loaded using CacheJDBCPOJO Store.
>
> As these classes will not be available in node js, how can we load /fetch
> the data from node js using cahe.put /cache.get.Tried creating similar
> classes in node and pass the object of these classes to
> cahe.put /cache.get but it is in't working.
>
>
>
>
>
>


Slow cache updates with indexing module enabled

2020-02-12 Thread xero
Hi,
We are experiencing slow updates to a cache with multiple indexed fields
(around 25 indexes during testing, and we expect many more) when an update
changes only one field. Basically, we have a customer *-> belongs to ->*
segment relationship, with one column per segment. Only one column is
updated, to 1 or 0, depending on whether the customer belongs to the segment.

During testing, we tried dropping half of the unrelated indexes (indexes over
fields that are not being updated) and throughput doubled: we went from
roughly 1k ops to 2k ops.

We found these cases may be related:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-19%3A+SQL+index+update+optimizations
https://issues.apache.org/jira/browse/IGNITE-7015?src=confmacro

Could you please confirm whether IGNITE-7015 could be related to this
scenario? If yes, do you have any plans to continue the development of the
fix?


We are using Ignite 2.7.6 with 10 nodes, 2 backups, indexing module enabled
and persistence.

Cache Configuration: [name=xdp-contactcomcast-1, grpName=null,
memPlcName=xdp, storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictPlcFactory=null,
onheapCache=false, sqlOnheapCache=false, sqlOnheapCacheMaxSize=0,
evictFilter=null, eagerTtl=true, dfltLockTimeout=0, nearCfg=null,
writeSync=PRIMARY_SYNC, storeFactory=null, storeKeepBinary=false,
loadPrevVal=false, aff=RendezvousAffinityFunction [parts=1024, mask=1023,
exclNeighbors=false, exclNeighborsWarn=false, backupFilter=null,
affinityBackupFilter=null], cacheMode=PARTITIONED, atomicityMode=ATOMIC,
backups=2, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288, rebalanceBatchesPrefetchCnt=2,
maxConcurrentAsyncOps=500, sqlIdxMaxInlineSize=-1, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
writeBehindCoalescing=true, maxQryIterCnt=1024,
affMapper=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@db5e319,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, qryDetailMetricsSz=0, readFromBackup=true,
nodeFilter=IgniteAllNodesPredicate [], sqlSchema=XDP_CONTACTCOMCAST_1,
sqlEscapeAll=false, cpOnRead=true, topValidator=null, partLossPlc=IGNORE,
qryParallelism=1, evtsDisabled=false, encryptionEnabled=false]


Thanks,



Scheduling Cache Refresh using Ignite

2020-02-12 Thread nithin91
Hi,

We are doing a POC on exploring Ignite's in-memory capabilities and building
a REST API on top of it using Node Express.

Currently, as part of the POC, we have installed Ignite on UNIX and are
trying to load data from an Oracle DB into the Ignite cache using
CacheJdbcPojoStore.

Can someone help me with whether the following scenarios can be handled using
Ignite, as I couldn't find this in the official documentation?

1. If we want to add/drop/modify a column in the cache, can we update the
bean file directly while the node is running, or do we need to stop the node
and then restart it? It would be really helpful if you could share sample
code or a documentation link.

2. How can we refresh the Ignite cache automatically, or schedule the cache
refresh? It would be really helpful if you could share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you could
share sample code or a documentation link.

4. Is there any other way to load the caches fast, other than
CacheJdbcPojoStore? It would be really helpful if you could share sample code
or a documentation link.





Re: Using putAll(TreeMap) with BinaryObjects

2020-02-12 Thread Pavel Tupitsyn
The reason for any deadlock (local or distributed) is taking locks on the
same objects in different order, e.g.:
Thread 1 locks A, B
Thread 2 locks B, A
  => thread 1 holds A and waits on B, thread 2 holds B and waits on A

When doing putAll, Ignite iterates over the provided map and locks keys in
iterator order. A TreeMap is sorted, so it solves the problem automatically:
locks are taken in the same order on all nodes/threads, preventing deadlocks.
A HashMap, on the other hand, can order elements arbitrarily, causing random
deadlocks.


For a withKeepBinary cache you have two options:
- A LinkedHashMap preserves insertion order. Make sure you always populate
the map in the same order.
- A TreeMap with a Comparator. Implement a Comparator that sorts your keys in
a consistent way, probably based on a particular field of the BinaryObject; a
sketch follows.
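A minimal sketch of the TreeMap option, assuming every key carries a
comparable "id" field (the field name is illustrative; binCache would come
from cache.withKeepBinary()):

```java
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class BinaryPutAll {
    static void putAllBinary(IgniteCache<BinaryObject, BinaryObject> binCache,
                             Map<BinaryObject, BinaryObject> entries) {
        // Order keys by a field present in every key object, so all
        // threads/nodes take the key locks in the same order.
        Comparator<BinaryObject> byId =
            Comparator.comparing((BinaryObject k) -> k.<Long>field("id"));

        Map<BinaryObject, BinaryObject> sorted = new TreeMap<>(byId);
        sorted.putAll(entries);

        binCache.putAll(sorted);  // deterministic lock order, no deadlocks
    }
}
```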



On Wed, Feb 12, 2020 at 12:12 AM Григорий Доможиров <
grigorydomozhi...@gmail.com> wrote:

> Hello.
> As far as I know, it's recommended to use putAll with a TreeMap to avoid
> deadlocks (are there any other reasons?). If so, how do we deal with a
> .withKeepBinary cache, where you have to provide a Map of BinaryObjects,
> which are not Comparable and thus can't be put in a TreeMap?
>
> Also, why can deadlocks happen with a HashMap, and why does using a TreeMap
> prevent them?
>


Ignite Cluster and Kubernetes Cluster

2020-02-12 Thread narges saleh
Hi All,

Is it possible to have multiple Ignite clusters within a single Kubernetes
cluster (say, an AKS cluster)?

thanks.


JDBC thin client incorrect security context

2020-02-12 Thread VeenaMithare
Hi , 

We have built a security and audit plugin for our Ignite cluster. We are
unable to get the right audit information, i.e. the right subject for users
logged in through DBeaver (JDBC thin client). This is because the subject id
associated with the "CACHE_PUT" event, when an update is triggered by the
JDBC thin client, contains the UUID of the node that executed the update
rather than the logged-in JDBC thin client user.

If this is a limitation of the current version of Ignite, is there any
workaround to get this information?

regards,
Veena.





Re: Need help for Access Violation

2020-02-12 Thread Igor Sapego
No, they are not.

Honestly, I'm running out of ideas, as I cannot reproduce the issue and thus
cannot debug it. It actually looks like some kind of memory corruption. Are
you sure the issue can be reproduced with the code snippet you've provided?

Best Regards,
Igor


On Tue, Feb 11, 2020 at 12:40 AM Anthony  wrote:

> Yes. I was using MSVC for both of them.
>
> BTW, are the project odbc and thin-client in the ignite needed? I did not
> build them because of some compiling issues.
>
> On Mon, Feb 10, 2020 at 5:54 AM Igor Sapego  wrote:
>
>> The issue looks very weird to me. Have you compiled the Ignite
>> libs using the same compiler as you use in your project?
>>
>> Best Regards,
>> Igor
>>
>>
>> On Fri, Feb 7, 2020 at 7:39 PM Anthony  wrote:
>>
>>> Hello,
>>> The "usrCp" value is
>>> NameValueType
>>> ▶ usrCp  const std::string &
>>> Seems that the "cfg.jvmClassPath" was not set properly?
>>>
>>> I am not familiar with java environment, should i set some environmental
>>> variable?
>>>
>>> Thank you!
>>>
>>> Anthony
>>>
>>> On Fri, Feb 7, 2020 at 5:20 AM Igor Sapego  wrote:
>>>
 Hi

 And what is the value of "usrCp" argument?
 For me the code works just fine.

 Best Regards,
 Igor


 On Fri, Feb 7, 2020 at 12:22 AM Anthony  wrote:

> Hello,
>
> I am new to ignite C++. I am using windows 10, VS community.
>
> I keep getting Access Violation when I am trying to run the following
> code.
>
> #include <ignite/ignition.h>
> #include <iostream>
> using namespace ignite;
>
> int main() {
> IgniteConfiguration cfg;
> Ignite node = Ignition::Start(cfg);
> std::cout << "node started" << std::endl;
> return 0;
> }
>
> Can anyone help with that?
>
> The error was generated from:
>
> std::string NormalizeClasspath(const std::string& usrCp)
> {
> if (usrCp.empty() || *usrCp.rbegin() == ';')
> return usrCp;
>
> return usrCp + ';';
> }
> Following are the error messages:
>
> Exception thrown at 0x7FFBADD635D6 (ignite.jni.dll) in
> Project1.exe: 0xC0000005: Access violation reading location
> 0x0000000000000000.
>
> Thank you !!
>



Re: Offheap memory consumption + enabled persistence

2020-02-12 Thread mikle-a
First, thanks a lot for your reply!

But I am still confused. Did I understand correctly that each node should
have enough memory to store the full data set?

Previously I thought that the main idea of partitioning was to distribute
data among servers; for example, to distribute 300GB of data among 3 servers
with 100GB each. Now it turns out that each server should have 300GB, or have
I understood it wrong?





Nodes started on local machine require more than 80% of physical RAM

2020-02-12 Thread Stéphane Thibaud
Hello!

I am deploying my first Ignite node, but I encountered the following message
on start:

[07:42:50] Nodes started on local machine require more than 80% of physical
RAM what can lead to significant slowdown due to swapping (please decrease
JVM heap size, data region size or checkpoint buffer size) [required=950MB,
available=1692MB]

I applied the following tweaks to the JVM: -Xms512m and -Xmx512m. This had an
effect, in that the 'required' part of the message above went down. However,
required/available < 0.8 now, and I don't know why the message still appears.
Any ideas?


Kind regards,

Stéphane





Re: Dynamic Cache Change not allowed

2020-02-12 Thread nithin91
Forgot to attach the bean file. Attached it now: config.txt





Dynamic Cache Change not allowed

2020-02-12 Thread nithin91
Hi,

We are doing a POC on exploring Ignite's in-memory capabilities and building
a REST API on top of it using Node Express.

Currently, as part of the POC, we have installed Ignite on UNIX and are
trying to load data from an Oracle DB into the Ignite cache using
CacheJdbcPojoStore.

As part of this process, a bean file is custom-configured to start the Ignite
node on UNIX (bean file attached). The bean file contains both the cache
configuration details and the Ignite configuration details.

Once the node is running, we are trying to do the following:

1. Connect to the Ignite node running on UNIX by creating a replica of the
attached bean file on the local system, adding an additional property to the
bean file with Client Mode = true, and then loading the caches defined in the
bean file deployed on UNIX, using the following method from the local system
in Java:

ignite.cache("CacheName").loadCache(null);

We are able to do this successfully.
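A minimal sketch of that working client-side load, with a placeholder config
path:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ClientCacheLoad {
    public static void main(String[] args) {
        // "client-bean.xml" stands in for the replicated bean file that has
        // clientMode=true set on the IgniteConfiguration.
        try (Ignite client = Ignition.start("client-bean.xml")) {
            // Triggers CacheJdbcPojoStore.loadCache() on the server nodes
            // that own the cache; null means "no filter, load everything".
            client.cache("CacheName").loadCache(null);
        }
    }
}
```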

2. Connect to the Ignite node running on UNIX the same way (a replica of the
attached bean file on the local system, with an additional Client Mode = true
property), but this time create a new cache, configure it, and then load it
using the attached Java code.

When we try this approach, we get an error saying that a dynamic cache change
is not allowed.

It would be really helpful if anyone could help me resolve this issue.

If this is not the right approach: is configuring all the caches in the bean
file the only available option? If so, what should be the approach for
building additional caches in Ignite and loading them using CacheJdbcPojoStore
while the node is running?

Also, can you please help me with ways to handle the following:

1. If we want to add/drop/modify a column in the cache, can we update the
bean file directly while the node is running, or do we need to stop the node
and then restart it? It would be really helpful if you could share sample
code or a documentation link.

2. How can we refresh the Ignite cache automatically, or schedule the cache
refresh? It would be really helpful if you could share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you could
share sample code or a documentation link.

4. Is there any other way to load the caches fast, other than
CacheJdbcPojoStore? It would be really helpful if you could share sample code
or a documentation link.

JavaCode.txt


