Re: Short lived objects data region recommendation

2018-12-17 Thread Isaeed Mohanna
Hi
1. When you say I should have a constant amount allocated for heap, do you
mean initially allocated min: 4GB, max: 4GB, or is a setting of min: 1GB,
max: 4GB all right as well?
2. If I do not configure any on-heap caches, does Ignite automatically
allocate on-heap caches for frequent requests or something similar?
3. Even though it sounds obvious, just to verify: when using the Ignite
compute grid, data allocated locally (inside the task) during execution of
the tasks is still on heap, correct?
Thanks
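
For question 1, a constant heap in practice usually means equal -Xms and -Xmx
(e.g. -Xms4g -Xmx4g). For question 2, on-heap caching in Ignite 2.x is opt-in
per cache; a minimal sketch of enabling it (the cache name and types are
illustrative):

CacheConfiguration<UUID, Object> cacheCfg = new CacheConfiguration<>("MyCache");
// Values still live in the off-heap data region; this only adds an on-heap
// cache layer on top of it.
cacheCfg.setOnheapCacheEnabled(true);

If this flag is never set, Ignite does not keep on-heap copies of cache entries
on its own.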


On Tue, Dec 11, 2018 at 2:50 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> All cache values are now always stored off-heap; the on-heap area is only used
> as a cache layer on top of the off-heap storage.
>
> 1. You should check whether performance is good enough without on-heap caching.
> 2. You should have some constant amount of RAM for heap (4G is all right,
> but it depends) and dedicate the rest of memory to off-heap (by configuring
> the default DataRegion). That is, unless you do a lot of on-heap caching.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вс, 9 дек. 2018 г. в 14:32, Isaeed Mohanna :
>
>> Hi
>> I have moved from Ignite 1.9.0 to Ignite 2.6.0.
>> Part of my implementation I generate many objects and store them in a
>> cache
>> and later a task process them, these objects are usually destroyed after
>> processing they live for ~1 minutes and i have around 2K of them each
>> cycle
>> (1min) , but very few of them live for weeks.
>> In Ignite 1.9.0 these were stored in the Java Heap and in the new Version
>> 2.6.0 be default they are stored offheap.
>> 1. Based on the scenario i described should my cache be stored off-heap?
>> or
>> should it be stored on-heap for faster creation\deletion?
>> 2. If i should store off-heap how much memory (%) should i keep for the
>> Java
>> heap?
>> Thanks in advance
>>
>>
>>
>>
>


Short lived objects data region recommendation

2018-12-09 Thread Isaeed Mohanna
Hi
I have moved from Ignite 1.9.0 to Ignite 2.6.0.
As part of my implementation I generate many objects and store them in a cache,
and later a task processes them. These objects are usually destroyed after
processing; they live for ~1 minute, and I have around 2K of them each cycle
(1 min), but very few of them live for weeks.
In Ignite 1.9.0 these were stored in the Java heap, and in the new version
2.6.0 they are stored off-heap by default.
1. Based on the scenario I described, should my cache be stored off-heap, or
should it be stored on-heap for faster creation/deletion?
2. If I should store off-heap, how much memory (%) should I keep for the
Java heap?
Thanks in advance
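
For reference, in Ignite 2.x the off-heap memory for cache data is sized through
the default data region, while the Java heap is set with the usual JVM flags. A
minimal sketch for Ignite 2.6 (the 2GB/10GB figures are purely illustrative):

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setInitialSize(2L * 1024 * 1024 * 1024);  // 2 GB allocated initially
regionCfg.setMaxSize(10L * 1024 * 1024 * 1024);     // up to 10 GB off-heap

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDefaultDataRegionConfiguration(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
// The Java heap itself is sized separately via JVM options, e.g. -Xms4g -Xmx4g.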





Re: IgniteThread ThreadLocal Memory Leak

2017-03-15 Thread Isaeed Mohanna
Hi
As I mentioned before, it is not my code that uses the ThreadLocal; a
3rd-party library that I am using does so. I am trying to get them to
fix it, but until they do I wanted to verify whether I can remedy the
situation without having to wait for them.
I'll push them to change their implementation.
Thanks a lot

On Tue, Mar 14, 2017 at 8:59 PM, vkulichenko 
wrote:

> Isaeed,
>
> There is no such way.
>
> If you're using a thread local, you should properly clean it up when the value
> is not needed anymore. Another way is to use something else instead.
>
> Why are you using thread locals in compute in the first place? What is the
> use case for this?
>
> -Val
>
>
>
>


IgniteThread ThreadLocal Memory Leak

2017-03-14 Thread Isaeed Mohanna
Hi
I have an Ignite 1.7.0 cluster with 2 nodes performing multiple operations.
One of my main scenarios is the execution of small tasks that generate
reports using an external library. After starting the cluster I can see the
Java heap usage growing without being released by GC. I created a heap dump
and investigated the retained size of the different objects, and I can see an
accumulation of objects in:
org.apache.ignite.thread.IgniteThread#26
java.lang.ThreadLocal$ThreadLocalMap#62

It appears that my external library is using ThreadLocal storage to keep some
data required for report generation, and since the Ignite thread is reused,
these per-thread objects accumulate with every execution until the system
runs out of memory.
Is there a way to force a cleanup of an Ignite thread's thread-local storage
at the end of the execution of any task, regardless of the task?
I am currently using Ignite Compute to execute my task, which implements
IgniteCallable.
Thanks for the help
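
One possible stopgap until the library is fixed, sketched below: if the library
exposes its ThreadLocal (or a cleanup hook), clear it in a finally block inside
the callable so the state never outlives one execution on the pooled Ignite
thread. "ReportLib" and its "CONTEXT" field are hypothetical names standing in
for the 3rd-party library:

import org.apache.ignite.lang.IgniteCallable;

public class ReportTask implements IgniteCallable<byte[]> {
    @Override public byte[] call() {
        try {
            return ReportLib.generateReport();   // populates ReportLib.CONTEXT (a ThreadLocal)
        }
        finally {
            // Clear the thread-local state before the pooled Ignite thread is reused.
            ReportLib.CONTEXT.remove();
        }
    }
}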






Upgrade to 1.8.0 problems

2017-03-07 Thread Isaeed Mohanna
Hi
I have a 1.7.0 cluster with two nodes and several caches in it. I tried
moving to Ignite 1.8.0 and unfortunately I am facing two issues:
1. My queries stopped working against the cache. A simple SQL query like
this:
SqlQuery sql = new SqlQuery(Entity.class, "type = ?");
type is part of the entity class; it is simply an enum. Any idea why my query
now returns zero results?

2. In my efforts to resolve problem 1, I thought to start using the binary
marshaller. My entities implemented the Externalizable interface, so I
removed the interface to use the binary marshaller, which actually helped
and my query is working again. However, whenever I have two nodes in the
cluster I receive the following exception several times when I join the
second node to the cluster, although it appears to go away afterwards. What is
causing this exception, and how do I resolve the problem?
Thanks

[2017-03-08 05:58:19] [ERROR]
[org.apache.ignite.internal.processors.task.GridTaskWorker:org.apache.ignite.logger.slf4j.Slf4jLogger.error(Slf4jLogger.java:112)]:
Failed to obtain remote job result policy for result from
ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl
[job=C2V2 [c=com.hhh.Task@100853d8], sib=GridJobSiblingImpl
[sesId=a454e7caa51-9feeb429-75c5-4920-81a1-e95fdb12ebd5,
jobId=b454e7caa51-9feeb429-75c5-4920-81a1-e95fdb12ebd5,
nodeId=a02307c0-64d8-443f-83e5-2dbdca9ab259, isJobDone=false],
jobCtx=GridJobContextImpl
[jobId=b454e7caa51-9feeb429-75c5-4920-81a1-e95fdb12ebd5, timeoutObj=null,
attrs={}], node=TcpDiscoveryNode [id=a02307c0-64d8-443f-83e5-2dbdca9ab259,
addrs=[20.0.2.55], sockAddrs=[/20.0.2.55:47500], discPort=47500, order=2,
intOrder=2, lastExchangeTime=1488952697596, loc=false,
ver=1.8.0#20161205-sha1:9ca40dbe, isClient=false], ex=class
o.a.i.IgniteException: null, hasRes=true, isCancelled=false,
isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception
(override or implement ComputeTask.result(..) method if you would like to
have automatic failover for this exception).
at
org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1030)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1023)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6596)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1023)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:841)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:996)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1221)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:710)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:102)
at
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:673)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: null
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2040)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:556)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6564)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:550)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:479)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1180)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1894)
... 7 more
Caused by: java.lang.NullPointerException
at
org.apache.ignite.internal.processors.service.GridServiceProcessor.serviceTopology(GridServiceProcessor.java:700)
at
org.apache.ignite.internal.processors.service.GridServiceProxy.randomNodeForService(GridServiceProxy.java:249)
at
org.apache.ignite.internal.processors.service.GridServiceProxy.nodeForService(GridServiceProxy.java:226)
at
org.apache.ignite.internal.processors.service.GridServiceProxy.invokeMethod(GridS
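
For reference, the query in item 1 above would typically be built and executed
as sketched below; the Entity class shown is only an illustrative stand-in for
the real entity (UUID keys and the field name "type" are assumptions):

import java.util.List;
import java.util.UUID;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

class Entity {
    enum Type { A, B }

    @QuerySqlField
    Type type;   // the enum field queried with "type = ?"
}

class EnumQueryExample {
    static List<Cache.Entry<UUID, Entity>> findByType(IgniteCache<UUID, Entity> cache, Entity.Type t) {
        SqlQuery<UUID, Entity> qry = new SqlQuery<>(Entity.class, "type = ?");
        // The enum value is passed directly as the query argument.
        return cache.query(qry.setArgs(t)).getAll();
    }
}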

Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-28 Thread Isaeed Mohanna
Hi
No, currently I add items much more slowly than I remove them; there is no
accumulation of items in the queue. I clear the queue once every 30 seconds,
removing all items from it, and the data in the queue is different from
the data in the EventsCache. Can entities from the EventsCache end up in this
internal datastructures_0 cache? In the memory dump I could see
entities from EventsCache and not from the queue.
I am also seeing the following log message. Even though I doubt it, could it
somehow be connected? I do implement the Externalizable interface; should I
switch to the binary marshaller? Is it relevant at all to the memory issue?
cannot be serialized using BinaryMarshaller because it either implements
Externalizable interface or have writeObject/readObject methods
Regards

On Thu, Nov 24, 2016 at 1:04 PM, dkarachentsev 
wrote:

> Hi,
>
> Yes, you're right, this is a cache that holds data for distributed
> data structures, including IgniteQueue. Is it possible that you add items
> to the queue faster than you poll them?
>
> TxCommittedVersionsSize and TxRolledbackVersionsSize are common for all
> caches and just store versions of finished transactions, so if you have a
> transactional cache you will find values greater than zero whenever
> transactions were made. Don't worry about the size; the number is finite
> (262144 by default) and it takes a small amount of memory.
>
>
>
>


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-23 Thread Isaeed Mohanna
Hi again,
This is in addition to the growing datastructures_0, which is not a cache
that I have created.

Through JMX, looking at the EventsCache mentioned above (where the retained
memory resides), the CacheLocalMetricsMXBeanImpl shows the following attributes:
KeySize 403
Size 403
TxCommittedVersionsSize 129819
TxRolledbackVersionsSize 129819

The KeySize and Size attributes seem to change correctly, up and down,
according to the generated and processed events; however,
TxCommittedVersionsSize and TxRolledbackVersionsSize seem to keep accumulating.
Could you please elaborate on what these attributes are? From the name I
assume they are related to transactions, but my cache is atomic (not
transactional). Is it possible that removed entries are remembered for some
reason in some transaction log?

regards,
Isaeed


On Wed, Nov 23, 2016 at 3:00 PM, Isaeed Mohanna  wrote:

> Hi
> i did call the name method and i got "datastructures_0" which is not a
> cache that i have created.
> the size count now is at 13.5k, I am using an ignite queue, is it possible
> that this is created internally by ignite for the queue or some internal
> ignite cache?
> Thanks
>
> On Wed, Nov 23, 2016 at 2:40 PM, dkarachentsev  > wrote:
>
>> Hi,
>>
>> To get to know what cache represents CacheMetricsMXBean you can call
>> name()
>> method of it. And yes, it's worth to check if that entries cause your
>> problem.
>>
>>
>>
>>
>
>


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-23 Thread Isaeed Mohanna
Hi
I did call the name() method and got "datastructures_0", which is not a
cache that I have created.
The size count is now at 13.5k. I am using an Ignite queue; is it possible
that this cache is created internally by Ignite for the queue, or is it some
other internal Ignite cache?
Thanks

On Wed, Nov 23, 2016 at 2:40 PM, dkarachentsev 
wrote:

> Hi,
>
> To find out which cache a CacheMetricsMXBean represents, you can call its
> name() method. And yes, it is worth checking whether those entries cause your
> problem.
>
>
>
>


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-23 Thread Isaeed Mohanna
Hi
I have added JMX monitoring to my application to track the primary and backup
cache sizes; it will take a couple of days until the issue occurs.
While scanning through Ignite's JMX beans there is an entry "datastructures_0"
under the "My_Cluster" group, more specifically the beans:
org.apache:clsLdr=764c12b6,grid=My_Cluster,group=datastructures_0,name=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl
org.apache:clsLdr=764c12b6,grid=My_Cluster,group=datastructures_0,name=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl

I have noticed that the KeySize attribute of these beans was 80k+ before I
stopped the node and returned to 0 at restart, but since the restart it has
been increasing by 1 every second or two, without decreasing. I'll keep an
eye on it, but is it possible this is related to what I am experiencing?
Which cache does this bean represent?

Thanks




On Tue, Nov 22, 2016 at 11:24 AM, dkarachentsev 
wrote:

> Hi!
>
> Have you checked that without backups you have no OOME? Can you be sure
> that all events are removed?
> Please verify that the number of entries doesn't grow constantly while the
> application runs; for that you may use IgniteCache.size() and
> IgniteCache.sizeLong() with PRIMARY and/or BACKUP peek modes.
>
> Thanks!
>
>
>
>
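
A minimal sketch of the size check suggested above (the cache name matches this
thread; running it periodically shows the primary and backup entry counts):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

class SizeCheck {
    static void print(Ignite ignite) {
        IgniteCache<Object, Object> cache = ignite.cache("EventsCache");
        // Cluster-wide counts, split by the role the entries play on their nodes.
        System.out.println("primary=" + cache.sizeLong(CachePeekMode.PRIMARY)
            + ", backup=" + cache.sizeLong(CachePeekMode.BACKUP));
    }
}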


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-14 Thread Isaeed Mohanna
Hi
My cache key class is java.util.UUID.
I am not using any collocation affinity. Could you please elaborate on how I
can use a constant affinity function to check whether a cache entry still
exists on a backup?

Thanks
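
One way to run such a check, sketched below: ask the cache's Affinity which
nodes own a given key, then peek at the local copy on the suspected backup
node. The cache name matches this thread; the rest is illustrative:

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

class BackupCheck {
    // Print which nodes own the key (primary vs. backup).
    static void printOwners(Ignite ignite, UUID key) {
        Affinity<UUID> aff = ignite.affinity("EventsCache");
        for (ClusterNode n : aff.mapKeyToPrimaryAndBackups(key))
            System.out.println(n.id() + (aff.isPrimary(n, key) ? " (primary)" : " (backup)"));
    }

    // Run this on the backup node itself (e.g. inside a broadcast closure) after remove():
    static boolean backupCopyStillPresent(Ignite ignite, UUID key) {
        return ignite.<UUID, Object>cache("EventsCache")
            .localPeek(key, CachePeekMode.BACKUP) != null;
    }
}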

On Mon, Nov 14, 2016 at 4:50 PM, Andrey Mashenkov 
wrote:

> Hi,
>
> On a remove operation the entry should be removed from the primary node and
> the backup nodes as well.
> Can you reproduce the issue? Can you check whether the entry was removed only
> from the primary node and still exists on a backup, e.g. using a constant
> affinity function?
>
> I think it's possible that the backup is not being cleaned due to key
> serialization issues. Would you provide the key class implementation?
>
> On Sun, Nov 13, 2016 at 4:14 PM, Isaeed Mohanna  wrote:
>
>> Hi
>> There is no eviction policy since entries in the caches are removed by my
>> application. (calling IgniteCache.remove ).
>>
>> Digging through the core dump I can see that most resident items are
>> cache entries were the cachecontext.cachename points to EventsCache, as i
>> have mentioned before this cache has very frequent writes and deletions of
>> events (i am using remove to delete the events), however this cache is also
>> atomic,partitioned and have a backup of at least one so in case a node
>> fails the event is not lost. when calling remove on a cache, is the backup
>> of an entry removed as well? is it possible that the backup is not being
>> cleaned?
>>
>> Currently i am using the default garbage collector settings, i can't see
>> any spikes in performance due to GC, since i experience memory outage in
>> several days i am not sure i am collecting data more than the GC is able to
>> claim, I will try manually performing a GC when the system is about to
>> crash to see wither forcing a GC will clean the memory.
>>
>> Thank you for ur help
>>
>> On Fri, Nov 11, 2016 at 11:59 AM, Andrey Mashenkov <
>> amashen...@gridgain.com> wrote:
>>
>>> Hi Isaeed Mohanna,
>>>
>>> I don't see any eviction or expired policy configured. Is entry deletion
>>> performed by you application?
>>>
>>> Have you try to detect which of caches id grows unexpectedly?
>>> Have you analyse GC logs or tried to tune GC? Actually, you can putting
>>> data faster as garbage is collecting. This page may be helpful
>>> http://apacheignite.gridgain.org/v1.7/docs/performan
>>> ce-tips#tune-garbage-collection.
>>>
>>> Also you can get profile (with e.g. JavaFlightRecorder) of grid under
>>> load to understand what is really going on.
>>>
>>> Please let me know, if there are any issues.
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 10:10 AM, Isaeed Mohanna 
>>> wrote:
>>>
>>>> Hi
>>>> My cache configurations appear below.
>>>>
>>>> // Cache 1 - a cache of ~15 entities that has a date stamp that is
>>>> updated every 30 - 120 seconds
>>>> CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
>>>> Cache1Cfg cheCfg.setName("Cache1Name");
>>>> Cache1Cfg .setCacheMode(CacheMode.REPLICATED);
>>>> Cache1Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>> Cache1Cfg .setStartSize(50);
>>>>
>>>> // Cache 2 - A cache used as an ignite queue with frequent inserts and
>>>> removal from the queue
>>>> CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
>>>> Cache2Cfg .setName("Cache2Name");
>>>> Cache2Cfg .setCacheMode(CacheMode.REPLICATED);
>>>> Cache2Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>>
>>>> // Cache 3 - hundreds of entities updated daily
>>>> CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
>>>> Cache3Cfg .setName("Cache3Name");
>>>> Cache3Cfg .setCacheMode(CacheMode.REPLICATED);
>>>> Cache3Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>> Cache3Cfg .setIndexedTypes(UUID.class, SomeClass.class);
>>>>
>>>> // Cache 4 - Cache with very few writes and reads
>>>> CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
>>>> Cache4Cfg .setName("Cache4Name");
>>>> Cache4Cfg .setCacheMode(CacheMode.REPLICATED);
>>>> Cache4Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>>
>>>> // Events Cache - cache with very frequent writes and delete, acts as
>>>> events queue
>>>> CacheConfiguration eventsCacheConfig= new CacheConfiguration<>();
>>>&

Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-13 Thread Isaeed Mohanna
Hi
There is no eviction policy, since entries in the caches are removed by my
application (by calling IgniteCache.remove()).

Digging through the heap dump I can see that most resident items are cache
entries where cachecontext.cachename points to EventsCache. As I have
mentioned before, this cache has very frequent writes and deletions of
events (I am using remove to delete the events). However, this cache is also
atomic, partitioned and has at least one backup, so that in case a node
fails the event is not lost. When calling remove on a cache, is the backup
copy of an entry removed as well? Is it possible that the backup is not being
cleaned?

Currently I am using the default garbage collector settings. I can't see
any performance spikes due to GC; since I run out of memory only after
several days, I am not sure whether I am generating data faster than the GC
is able to reclaim it. I will try manually triggering a GC when the system is
about to crash to see whether forcing a GC frees the memory.

Thank you for your help

On Fri, Nov 11, 2016 at 11:59 AM, Andrey Mashenkov 
wrote:

> Hi Isaeed Mohanna,
>
> I don't see any eviction or expiry policy configured. Is entry deletion
> performed by your application?
>
> Have you tried to detect which of the caches grows unexpectedly?
> Have you analysed the GC logs or tried to tune GC? You may actually be
> putting data in faster than garbage is collected. This page may be helpful:
> http://apacheignite.gridgain.org/v1.7/docs/performance-tips#tune-garbage-
> collection.
>
> Also you can get profile (with e.g. JavaFlightRecorder) of grid under load
> to understand what is really going on.
>
> Please let me know, if there are any issues.
>
>
>
> On Thu, Nov 10, 2016 at 10:10 AM, Isaeed Mohanna 
> wrote:
>
>> Hi
>> My cache configurations appear below.
>>
>> // Cache 1 - a cache of ~15 entities that has a date stamp that is
>> updated every 30 - 120 seconds
>> CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
>> Cache1Cfg cheCfg.setName("Cache1Name");
>> Cache1Cfg .setCacheMode(CacheMode.REPLICATED);
>> Cache1Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> Cache1Cfg .setStartSize(50);
>>
>> // Cache 2 - A cache used as an ignite queue with frequent inserts and
>> removal from the queue
>> CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
>> Cache2Cfg .setName("Cache2Name");
>> Cache2Cfg .setCacheMode(CacheMode.REPLICATED);
>> Cache2Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>
>> // Cache 3 - hundreds of entities updated daily
>> CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
>> Cache3Cfg .setName("Cache3Name");
>> Cache3Cfg .setCacheMode(CacheMode.REPLICATED);
>> Cache3Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> Cache3Cfg .setIndexedTypes(UUID.class, SomeClass.class);
>>
>> // Cache 4 - Cache with very few writes and reads
>> CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
>> Cache4Cfg .setName("Cache4Name");
>> Cache4Cfg .setCacheMode(CacheMode.REPLICATED);
>> Cache4Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>
>> // Events Cache - cache with very frequent writes and delete, acts as
>> events queue
>> CacheConfiguration eventsCacheConfig= new CacheConfiguration<>();
>> eventsCacheConfig.setName("EventsCache");
>> eventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>> eventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> eventsCacheConfig.setIndexedTypes(UUID.class, SomeClass.class);
>> eventsCacheConfig.setBackups(1);
>> eventsCacheConfig.setOffHeapMaxMemory(0);
>>
>> // Failed Events Cache - cache with less writes and reads stores failed
>> events
>> CacheConfiguration failedEventsCacheConfig = new
>> CacheConfiguration<>();
>> failedEventsCacheConfig.setName("FailedEventsCache");
>> failedEventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>> failedEventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> failedEventsCacheConfig.setIndexedTypes(UUID.class, EventEntity.class);
>> failedEventsCacheConfig.setBackups(1);
>> failedEventsCacheConfig.setOffHeapMaxMemory(0);
>>
>> // In addition i have one atomic reference
>> AtomicConfiguration atomicCfg = new AtomicConfiguration();
>> atomicCfg.setCacheMode(CacheMode.REPLICATED);
>> Thanks again
>>
>> On Wed, Nov 9, 2016 at 5:26 PM, Andrey Mashenkov > > wrote:
>>
>>> Hi Isaeed Mohanna,
>>>
>>> Would you please provide your cache configurations?
>>>
>>>
>>> On Wed, Nov 9, 2016 at 5:37 PM

Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-09 Thread Isaeed Mohanna
Hi
My cache configurations appear below.

// Cache 1 - a cache of ~15 entities with a date stamp that is updated
// every 30 - 120 seconds
CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
Cache1Cfg.setName("Cache1Name");
Cache1Cfg.setCacheMode(CacheMode.REPLICATED);
Cache1Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
Cache1Cfg.setStartSize(50);

// Cache 2 - a cache used as an Ignite queue, with frequent inserts into and
// removals from the queue
CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
Cache2Cfg.setName("Cache2Name");
Cache2Cfg.setCacheMode(CacheMode.REPLICATED);
Cache2Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

// Cache 3 - hundreds of entities updated daily
CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
Cache3Cfg.setName("Cache3Name");
Cache3Cfg.setCacheMode(CacheMode.REPLICATED);
Cache3Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
Cache3Cfg.setIndexedTypes(UUID.class, SomeClass.class);

// Cache 4 - cache with very few writes and reads
CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
Cache4Cfg.setName("Cache4Name");
Cache4Cfg.setCacheMode(CacheMode.REPLICATED);
Cache4Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

// Events Cache - cache with very frequent writes and deletes, acts as an
// events queue
CacheConfiguration eventsCacheConfig = new CacheConfiguration<>();
eventsCacheConfig.setName("EventsCache");
eventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
eventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
eventsCacheConfig.setIndexedTypes(UUID.class, SomeClass.class);
eventsCacheConfig.setBackups(1);
eventsCacheConfig.setOffHeapMaxMemory(0);

// Failed Events Cache - cache with fewer writes and reads, stores failed
// events
CacheConfiguration failedEventsCacheConfig = new CacheConfiguration<>();
failedEventsCacheConfig.setName("FailedEventsCache");
failedEventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
failedEventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
failedEventsCacheConfig.setIndexedTypes(UUID.class, EventEntity.class);
failedEventsCacheConfig.setBackups(1);
failedEventsCacheConfig.setOffHeapMaxMemory(0);

// In addition I have one atomic reference
AtomicConfiguration atomicCfg = new AtomicConfiguration();
atomicCfg.setCacheMode(CacheMode.REPLICATED);
Thanks again
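
For completeness, a sketch of how such configurations are typically registered
with the node at startup (discovery and other settings from the real
configuration are omitted here):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCacheConfiguration(Cache1Cfg, Cache2Cfg, Cache3Cfg, Cache4Cfg,
    eventsCacheConfig, failedEventsCacheConfig);
cfg.setAtomicConfiguration(atomicCfg);
Ignite ignite = Ignition.start(cfg);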

On Wed, Nov 9, 2016 at 5:26 PM, Andrey Mashenkov 
wrote:

> Hi Isaeed Mohanna,
>
> Would you please provide your cache configurations?
>
>
> On Wed, Nov 9, 2016 at 5:37 PM, Isaeed Mohanna  wrote:
>
>> Hi
>> i have an ignite 1.7.0 cluster with 3 nodes running , i have 3 PARTITIONED
>> ATOMIC CACHES and 2 REPLICATED ATOMIC CACHES, Most of these caches are
>> populated with events data, so each cache entry is short lived its
>> inserted,
>> processed later by some task and removed. so the caches are pretty much
>> very
>> dynamic.
>> Recently the load in our system has increased (more events were received
>> and
>> generated) and we started experiencing out of memory fails once in while
>> (several days depending on machine size).
>> I have created several heap dumps and noticed the largest retained objects
>> in memory is by the following classes: GridDhtLocalPartition,
>> ConccurentHashMap8,ConccurentHashMap8$Node[].
>> I can see the GridDhtLocalPartition has a ConccurentHashMap8 so most
>> likely
>> all three reference the same thing.
>> My question what is this class and why does it retain memory, entities in
>> my
>> caches are usually short lived (several minutes in most caches) so i would
>> expect the memory to be released? any hints on how to continue my
>> investigation would be great.
>> Thanks
>>
>>
>>
>>
>>
>>
>
>


Cache Memory Behavior \ GridDhtLocalPartition

2016-11-09 Thread Isaeed Mohanna
Hi
I have an Ignite 1.7.0 cluster with 3 nodes running. I have 3 PARTITIONED
ATOMIC caches and 2 REPLICATED ATOMIC caches. Most of these caches are
populated with events data, so each cache entry is short lived: it is
inserted, processed later by some task, and removed, which makes the caches
very dynamic.
Recently the load on our system has increased (more events were received and
generated) and we started experiencing out-of-memory failures once in a while
(every several days, depending on machine size).
I have created several heap dumps and noticed that the largest retained
objects in memory are held by the following classes: GridDhtLocalPartition,
ConcurrentHashMap8, ConcurrentHashMap8$Node[].
I can see that GridDhtLocalPartition has a ConcurrentHashMap8, so most likely
all three reference the same thing.
My question is: what is this class and why does it retain memory? Entities in
my caches are usually short lived (several minutes in most caches), so I would
expect the memory to be released. Any hints on how to continue my
investigation would be great.
Thanks







Re: Ignite Grace full node shutdown

2016-08-23 Thread Isaeed Mohanna
Thank You

On Tue, Aug 23, 2016 at 9:14 PM, vkulichenko 
wrote:

> Ignition.stop() stops only one node, all others will continue running.
>
> -Val
>
>
>
>


Re: Ignite Grace full node shutdown

2016-08-23 Thread Isaeed Mohanna
According to the documentation, Ignition.stop(false) will stop the whole grid,
while in my case I would like to stop this specific node only, and the
other two nodes should continue working. Could you please confirm that
Ignition.stop will stop only the current node and not the whole grid?
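
For reference, a minimal sketch of the non-cancelling local shutdown discussed
here (assuming the node was started in this JVM; the configuration path is a
hypothetical example):

Ignite ignite = Ignition.start("config/my-server.xml");  // hypothetical config file
// ... node runs tasks and services ...
// Stops only this JVM's node. cancel=false means running jobs are allowed to
// finish before the node leaves the cluster; the other nodes keep working.
Ignition.stop(false);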

On Tue, Aug 23, 2016 at 1:10 PM, Vladislav Pyatkov 
wrote:

> Hello,
>
> You can use Ignition.stop(false) to wait for completion of the active tasks
> on a node.
>
> Also, you can use checkpointing [1] to save task state on one node and
> continue on another node.
>
> [1]: https://apacheignite.readme.io/docs/checkpointing
>
> On Mon, Aug 22, 2016 at 3:54 PM, Isaeed Mohanna  wrote:
>
>> Hi
>> I have ignite embedded in my own server, I have a 3 node cluster running
>> different tasks and services all the time.
>> for availability reasons i would like to perform a rolling upgrade of my
>> server (not ignite, ignite will still be in the same version), so i would
>> like to stop one node, put in a new version and start it again, doing so
>> for
>> each node in the cluster.
>> The only problem i have that when i stop a node it is possible that a task
>> is being executed on this node and therefore interrupted. I would like to
>> avoid this interruption.
>> Is it possible pragmatically using Ignite API to tell a specific Node to
>> stop accepting new tasks, continue executing current tasks and then
>> shutdown
>> when all tasks are completed?
>> I saw stop node which sends a kill command, therefore its identical to
>> stopping the application. and Ignite.close which seems to stop the whole
>> cluster, therefore both will interrupt running tasks.
>> Thanks in advance
>>
>>
>>
>>
>
>
>
> --
> Vladislav Pyatkov
>


Ignite Grace full node shutdown

2016-08-22 Thread Isaeed Mohanna
Hi
I have Ignite embedded in my own server. I have a 3-node cluster running
different tasks and services all the time.
For availability reasons I would like to perform a rolling upgrade of my
server (not Ignite; Ignite will stay at the same version), so I would
like to stop one node, put in the new version and start it again, doing so for
each node in the cluster.
The only problem I have is that when I stop a node, a task may be executing
on that node and would therefore be interrupted. I would like to
avoid this interruption.
Is it possible, programmatically using the Ignite API, to tell a specific node
to stop accepting new tasks, continue executing its current tasks, and then
shut down when all tasks are completed?
I saw the stop-node command, which sends a kill signal and is therefore
identical to stopping the application, and Ignite.close, which seems to stop
the whole cluster; both will interrupt running tasks.
Thanks in advance





Re: Two Ignite Clusters formed after network disturbance

2016-03-26 Thread Isaeed Mohanna
I see.
Thanks for the information.

On Fri, Mar 25, 2016 at 3:24 AM, vkulichenko 
wrote:

> This is not completely correct. Ignite also lacks the implementation of
> GridSegmentationProcessor that actually decides whether a segment is valid
> or not. The implementation of the processor can be injected into Ignite as
> part of a plugin.
>
> -Val
>
>
>
>


Re: Two Ignite Clusters formed after network disturbance

2016-03-24 Thread Isaeed Mohanna
Thank you very much for the information

On Thu, Mar 24, 2016 at 1:07 PM, Denis Magda  wrote:

> Hi,
>
> Correct, the implementations of SegmentationResolver I was referring to are
> available as a part of GridGain.
>
> However you're free to implement your own version of SegmentationResolver
> and pass it to IgniteConfiguration.setSegmentationResolver(...) upon node
> startup. Ignite will detect your resolver and will use it the same way as
> it
> uses other implementations.
>
> --
> Denis
>
>
>
>


Re: Two Ignite Clusters formed after network disturbance

2016-03-23 Thread Isaeed Mohanna
Thank you Denis for the response.
As I understand it, the segmentation resolver implementations are available
only in GridGain.
My question is whether Ignite supports the interface, i.e. can I implement the
interface myself and Ignite will know what to do with it, or is the whole
concept missing in Ignite?
Thanks
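
To illustrate what such an implementation might look like, here is a rough
sketch (assuming the SegmentationResolver interface from the
org.apache.ignite.plugin.segmentation package referenced in this thread exposes
a single isValidSegment() check; the reachability target is purely illustrative):

import java.io.IOException;
import java.net.InetAddress;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.plugin.segmentation.SegmentationResolver;

public class GatewayPingResolver implements SegmentationResolver {
    @Override public boolean isValidSegment() throws IgniteCheckedException {
        try {
            // Treat this node as part of a valid segment only if a well-known
            // shared address (e.g. a gateway) is reachable.
            return InetAddress.getByName("10.0.0.1").isReachable(2000);
        }
        catch (IOException e) {
            return false;
        }
    }
}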

On Mon, Mar 21, 2016 at 4:04 PM, Denis Magda  wrote:

> Hi,
>
> This looks like a classical "split brain" scenario.
>
> Ignite has a built-in concept of a segmentation resolver [1] that allows you
> to work out situations like the one you have.
> But Ignite lacks any implementation of this interface.
>
> However, GridGain, which is built on top of Ignite, provides several
> segmentation resolver implementations as part of its enterprise feature
> set [2]
>
> [1]
>
> https://ignite.apache.org/releases/1.5.0.final/javadoc/org/apache/ignite/plugin/segmentation/package-frame.html
> [2] https://gridgain.readme.io/docs/network-segmentation
>
>
>
>
>


Two Ignite Clusters formed after network disturbance

2016-03-06 Thread Isaeed Mohanna
Hi
I am using Ignite as a fault-tolerant compute and cache engine, with 4
nodes forming a cluster. One of the nodes holds singleton cluster services
that execute different jobs on the different cluster nodes.
Once in a while a network disturbance occurs; for example, node 4's network
was disconnected for a minute or so, and afterwards I can see that the first
3 nodes are still in the cluster while node 4 has created a cluster of its
own and started the singleton services once more. Running two clusters in
parallel will cause data corruption.
How can I avoid this situation and prevent an additional cluster from being
formed?
I considered increasing the failure detection timeout or the "ReconnectCount",
but I am not sure that is the best idea.
I will be happy to hear your opinion on how to face this problem and avoid the
formation of two clusters.

Thanks in advance
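
For reference, the two settings mentioned above are configured roughly as
sketched below (the values are illustrative, not a recommendation):

IgniteConfiguration cfg = new IgniteConfiguration();
// How long a node may be unresponsive before it is considered failed.
cfg.setFailureDetectionTimeout(30000);

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
// How many times discovery tries to (re)connect before giving up.
discoSpi.setReconnectCount(10);
cfg.setDiscoverySpi(discoSpi);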





Re: Difference between official ignite repository and gridgain repository

2015-09-03 Thread Isaeed Mohanna
Thanks for the info

On Thu, Sep 3, 2015 at 8:41 PM, vkulichenko [via Apache Ignite Users] <
ml-node+s70518n1270...@n6.nabble.com> wrote:

> Isaeed Mohanna wrote
> Hi
> I got confused because maven official repository lists these builds as
> alpha in the type column as seen here
> http://mvnrepository.com/artifact/org.apache.ignite/ignite-core
> It will help resolve the confusion if the build type is changed to stable
> instead of alpha...
> So if i understand correctly I should use the official maven repository
> unless there is a specific fix that i need then i should pull from grid
> gain repository.
> Thank you
>
> mvnrepository.com is not a repository, it's just a search engine
> (third-party, AFAIK). And it looks like this site treats '-incubating'
> qualifier as a sign of alpha release, which is actually not correct, so I
> would ignore this.
>
> 1.3.0 is the latest stable version of Ignite and it's absolutely OK to use
> it.
>
> -Val
>





Re: Difference between official ignite repository and gridgain repository

2015-09-02 Thread Isaeed Mohanna
Hi
I got confused because the Maven repository site lists these builds as
alpha in the type column, as seen here:
http://mvnrepository.com/artifact/org.apache.ignite/ignite-core
It would help resolve the confusion if the build type were changed to stable
instead of alpha...
So if I understand correctly, I should use the official Maven repository
unless there is a specific fix that I need, in which case I should pull from
the GridGain repository.
Thank you

On Wed, Sep 2, 2015 at 10:57 PM, vkulichenko [via Apache Ignite Users] <
ml-node+s70518n1261...@n6.nabble.com> wrote:

> Hi,
>
> All releases listed on the Apache Ignite download page are stable, not alpha.
> You were probably confused by the '-incubating' suffix, but it's there only
> because the project was in the incubation process when these versions were
> released. We have already graduated, so 1.4.0 will not have it. BTW, we expect
> it next week.
>
> GridGain community edition is a binary build created by GridGain which
> provides additional bug fixes and features that are not released as a part
> of Ignite yet. E.g., GridGain 1.3.3 is based on Ignite 1.3.0.
>
> Makes sense?
>
> -Val
>





Difference between official ignite repository and gridgain repository

2015-09-02 Thread Isaeed Mohanna
Hi
I am currently using Apache Ignite v1.0.0 and I would like to upgrade to a
more recent version for some bug fixes and new functionality.
In the official Ignite repository I can see only v1.0.0 as a release and
the 1.3 versions as alpha releases.
However, I see that in the GridGain Ignite repository there is v1.3.3 as the
most recent version.
It would be great if I could get a clarification on the difference between
the two repositories: why, for example, does v1.3.3 not appear in the official
repository, and are the versions in the official repository really alpha builds
of that version?
Thanks


