Hi,
It looks like the server nodes are down. Please increase the data region size
and try again. A sample configuration is below.
# Configure 8 GB as the data region size; this should be added to the server configuration
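The configuration itself was cut off in the archive, so here is a sketch of what such a server-side setting typically looks like (the bean classes are standard Ignite configuration classes; the 8 GB value follows the comment above, and the surrounding IgniteConfiguration bean is assumed):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="default"/>
                    <!-- 8 GB, expressed in bytes -->
                    <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```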
Dear Team,
We have a requirement for multiple caches on each node (4 nodes), which
are added dynamically. Initially it was working fine. After adding 1000 caches
on each node, we get a memory exception while adding a cache to each node,
and the client node is terminated, even though
Hello,
That's what I was thinking of... thanks Ilya
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException:
> *Out
> of memory in data region* [name=default, initSize=1.0 GiB, maxSize=1.0 GiB,
> *persistenceEnabled=true*]
>
> the OOM is occurring in data region, and that's off-heap memory, as you
> pointed out.
>
> On the other hand, on-heap memory in th
Hello!
Ok but, according to the error:
[15:01:23,559][SEVERE][client-connector-#134][] JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: *Out
of memory in data region* [name=default, initSize=1.0
Hello!
Ignite uses off-heap memory whether persistence is on or not (that
depends on another parameter [1]), but only for data storage.
Transaction processing is performed on heap, so an OOM may occur
there.
[1].
Hello
Thanks for your help.
I know that fixes the problem, but my question was about why I’m getting
that error when persistence is on.
As far as I know, if persistence is on, off-heap memory holds a subset of
the data when it doesn't all fit in memory, and no OOM error should be thrown...
unless there
>
> and, at some point during a transaction of ~1GB, I get this error:
>
> [15:01:23,559][SEVERE][client-connector-#134][] JVM will be halted
> immediately due to the failure: [failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=
[failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out
of memory in data region [name=default, initSize=1.0 GiB, maxSize=1.0 GiB,
*persistenceEnabled=true*] Try the following:
^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize
Hello!
It seems that you have run out of available memory, i.e., your operating
system could not allocate more memory even though the demand was still within
the range permitted by the data region configuration. How much RAM do you have
on that machine?
The fact that you still have heap left is irrelevant here
Hello,
I recently ran into an out-of-memory error on a durable persistent cache I
set up a few weeks ago. I have a single node, with durable persistence
enabled, as well as WAL archiving. I'm running Ignite ver.
2.8.1#20200521-sha1:86422096.
I looked at the stack trace, but I couldn't get
One of these, depending on your query type:
* new ScanQuery() { PageSize = 5 }
* new SqlQuery() { PageSize = 5 }
On Wed, Oct 28, 2020 at 5:24 PM Ravi Makwana
wrote:
> Hi Pavel,
>
> As we are not setting QueryBase.PageSize explicitly for SqlQuery and
> SqlFieldsQuery, the default of 1024 will be used
Hi Pavel,
As we are not setting QueryBase.PageSize explicitly for SqlQuery and
SqlFieldsQuery, the default of 1024 will be used.
We have not found any example of this so far; with one to look at, we can try
to explicitly set QueryBase.PageSize.
Can we have any reference, by checking which we can try to
On Wed, 28 Oct, 2020, 5:55 pm Pavel Tupitsyn, wrote:
> I found a bug in Ignite [1] which probably causes the issue on your side.
>
> Looks like you are running a query (is it a ScanQuery or SqlQuery?) and
> the size of one results page exceeds 2GB.
> Please try using a smaller value for
I found a bug in Ignite [1] which probably causes the issue on your side.
Looks like you are running a query (is it a ScanQuery or SqlQuery?) and the
size of one results page exceeds 2GB.
Please try using a smaller value for *QueryBase.PageSize*.
If you use the default value of 1024, your cache
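The arithmetic behind that warning is easy to check; the 2 MiB average entry size below is a hypothetical figure for illustration, not something reported in the thread:

```java
// Rough sizing of one query results page: a page holds PageSize entries,
// so its byte size is PageSize * average entry size.
public class PageSizeEstimate {
    static long pageBytes(int pageSize, long avgEntryBytes) {
        return (long) pageSize * avgEntryBytes;
    }

    public static void main(String[] args) {
        long defaultPage = pageBytes(1024, 2L * 1024 * 1024); // default PageSize, 2 MiB entries
        long smallPage = pageBytes(5, 2L * 1024 * 1024);      // PageSize = 5, as suggested
        // 1024 * 2 MiB = 2 GiB, one byte past what a signed 32-bit length can address
        System.out.println("default page: " + defaultPage + " bytes");
        System.out.println("small page:   " + smallPage + " bytes");
    }
}
```

With the default, a single page of 2 MiB entries is exactly 2 GiB, which no int-sized buffer can hold; dropping PageSize to 5 keeps a page at 10 MiB.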
Hi,
Our service is running as 64-bit, and we have verified the same on our app
server too.
Any findings from the logs?
Is there any way to replicate it?
Thanks,
On Wed, 28 Oct 2020 at 15:47, Pavel Tupitsyn wrote:
> Looks like the app is running in 32 bit mode, which can't use more than
>
Looks like the app is running in 32-bit mode, which can't use more than 2 GB
of memory.
The JVM and memory regions pre-allocate all of it, leaving nothing for .NET to
use.
Please check the `Platform` column in the Task Manager - does it say `32
bit`?
If yes, then try disabling `Prefer 32 bit` in the
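For reference, the `Prefer 32 bit` setting referred to above corresponds to the MSBuild `Prefer32Bit` property; a sketch of the relevant project-file fragment (where exactly the PropertyGroup sits in your csproj is an assumption):

```xml
<PropertyGroup>
  <!-- let the process run as 64-bit on a 64-bit OS -->
  <Prefer32Bit>false</Prefer32Bit>
  <PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
```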
Hi Ravi,
The exception indicates that Ignite.NET failed to allocate unmanaged memory
on the .NET side while trying to pass query data from Java to .NET.
This indicates that your system has run out of memory. Possible reasons are:
* Memory is consumed by other apps
* Memory is consumed by this app
Hi,
We are using the Apache Ignite 2.7.0 binary; the servers run Linux and the
app servers run Windows. We are using the Apache Ignite .NET APIs.
Recently we have noticed that our application stops due to a client
node throwing an Out Of Memory error, which we have seen in the Ignite
client
Igniters,
I've been with the community for more than 5 years and would say that
out-of-memory issues are among the most notorious ones, as well as
unexpected ones, that are reported by fellow Ignite developers. Even if we
do a great job estimating a required cluster capacity, the data volume can
>> I have spent some more time on the reproducer. It is now very simple and
>> reliably reproduces the issue with a simple loop adding slowly growing
>> entries into a cache with no continuous query or filters. I have attached
>> the source files and the log I obtain when running it.
>
> Running from a clean slate (no existing persistent data) this reproducer
> exhibits the out of memory error when adding an element 4150 bytes in size.
>
> I did find this SO article (
> https://stackoverflow.com/questions/55937768/igni
Just a correction to the context of the data region running out of memory: this
one does not have a queue of items or a continuous query operating on a
cache within it.
Thanks,
Raymond.
On Thu, Jun 11, 2020 at 4:12 PM Raymond Wilson
wrote:
> Pavel,
>
> I have run into a different
Pavel,
I have run into a different instance of an out-of-memory error in a data
region, in a different context from the one I wrote the reproducer for. In
this case, there is an activity which queues items for processing at a
point in the future and which does use a continuous query, however
persistence.
>
> Thanks,
> Pavel
>
> On Tue, May 12, 2020 at 12:23 PM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> Well, it appears I was wrong. It reappeared. :(
>>
> I thought I had sent a reply to this thread but cannot find it, so I am
> resending it now.
>
> Attached is a C# reproducer that throws Ignite out-of-memory errors in the
> situation I outlined above, where cache operations run against a small cache
> with persistence enabled.
Well, it appears I was wrong. It reappeared. :(
I thought I had sent a reply to this thread but cannot find it, so I am
resending it now.
Attached is a C# reproducer that throws Ignite out-of-memory errors in the
situation I outlined above, where cache operations run against a small cache
>>>>>> How are you loading the data? Do you use putAll or DataStreamer?
>>>>>>
>>>>>> Evgenii
>>>>>>
>>>>>> On Wed, Mar 4, 2020 at 15:37, Raymond Wilson <
>>>>>> raymond_wil...@trimble.com>:
>>>>>>
>>>>>> There are two processes interacting with the cache. One process is
>>>>>> writing
>>>>>> data into the cache, while the second process is extracting data from
>>>>>> the
>>>>>> cache using a continuous query. The pr
throwing the exception.
>>>>
>>>> Increasing the cache size further to 256 Mb resolves the problem for
>>>> this
>>>> data set, however we have data sets more than 100 times this size which
>>>> we
>>>> will be processing.
>>>>
>>>> Thanks,
>>>> Raymond.
>>>>
>>>> On Thu, Mar 5, 2020 at 12:10 PM Raymond Wilson <raymond_wil...@trimble.com>
>>>> wrote:
>>>>
>>>> > I've been having a sporadic issue with the Ignite 2.7.5 JVM halting due to
>>>> > out of memory error related to a cache with persis
> I've been having a sporadic issue with the Ignite 2.7.5 JVM halting due to
> out of memory error related to a cache with persistence enabled
>
> I just upgraded to the C#.Net, Ignite 2.7.6 client to pick up support for
> C# affinity functions and now have this issue appearing regularly while
> adding around 400Mb of data into the cache which is
I'm using Ignite v2.7.5 with C# client.
I have an error where Ignite throws an out of memory exception, like this:
2020-03-03 12:02:58,036 [287] ERR [MutableCacheComputeServer] JVM will be
halted immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class
I am not allocating a large amount of memory. I just want to allocate at least
2 GB of RAM out of the 22 GB of free space available.
I will try with 64-bit JVM.
Hello!
Please ensure that you are using a 64-bit JVM. You may not be able to
allocate large chunks of memory when using a 32-bit VM:
>>> OS name: Windows Server 2008 R2 6.1 *x86*
Regards,
--
Ilya Kasnacheev
On Fri, Jan 10, 2020 at 08:45, Tunas wrote:
> Can someone please give me some input. I am stuck
Can someone please give me some input? I have been stuck on this for the last 2 days.
Now, as soon as I increase it beyond 2 GB, it throws an out-of-memory exception:
class Program
{
public static IIgnite IgniteServer;
Hey,
Which thread pools are responsible for the compute jobs and cache operations?
Thanks,
Nadav
From: Ilya Kasnacheev
Sent: Wednesday, February 20, 2019 12:26 PM
To: user@ignite.apache.org
Subject: Re: Apache Ignite starts fast and then become really slow with out of
memory
Sent: Monday, February 18, 2019 11:39 AM
To: user@ignite.apache.org
Subject: Re: Apache Ignite starts fast and then become really slow with out of
memory
Hello!
I recommend starting from the simplest configuration and moving to a more
complex one. Remove the on-heap cache and see if the problem goes away.
Regards,
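A minimal sketch of what that first simplification step could look like in Spring XML (the cache name is hypothetical; `onheapCacheEnabled` is the CacheConfiguration property controlling the optional on-heap layer, false by default since Ignite 2.0):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- disable the optional on-heap layer; entries stay purely off-heap -->
    <property name="onheapCacheEnabled" value="false"/>
</bean>
```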
will help you to identify my problem more precisely? (GC logs,
> behavioral code, model code)?
>
> Thank you very much.
>
> Nadav
>
> *From:* Ilya Kasnacheev
> *Sent:* Friday, February 15, 2019 4:33 PM
> *To:* user@ignite.apache.org
Hello!
As far as I can see, the highlighted class does not dominate the heap in any
meaningful way.
You seem to have huge LinkedHashMaps, any idea where they are used/held?
Regards,
--
Ilya Kasnacheev
On Thu, Feb 14, 2019 at 22:11 wrote:
Hey,
Guys, is this normal…?
So many binary readers are opened without reuse? I saw an improvement issue about
this: https://jira.apache.org/jira/browse/IGNITE-5721
I'm running a simple compute task that splits around 4K IDs into compute jobs,
each of which takes a bulk of 500 from the cache
Hi Shawn,
The OOM error occurred on a remote server node; there was not sufficient
memory to process the request, but other threads were not affected by it.
It looks like Ignite was able to recover from the error, as it was suppressed
and a reply was sent to the client.
On Mon, Apr 2, 2018 at 8:22
Hi Andrey,
Thanks for your reply. It still confuses me:
1) For the Storm worker process, if it crashed because of OOM, it should have dumped the heap, since I set -XX:+HeapDumpOnOutOfMemoryError, but it didn't. For the Storm worker, it behaves like a normal fatal error which make
Hi Shawn,
1. Ignite uses off-heap memory to store cache entries. Clients store no cache
data. A cache in LOCAL mode can be used on the client side, and it uses
off-heap memory as well.
All data the client retrieves from the server will be in off-heap memory.
2. It is not an IgniteOutOfMemory error, but a JVM OOM.
So, try to
Hi,
My Ignite client hit a heap OOM yesterday. This is the first time we have encountered this issue. My Ignite client is colocated within a Storm worker process; this issue caused the Storm worker to restart. I have several questions about it (our Ignite version is 2.3.0):
1) If Ignite is in client mode, it use
Thanks Val.
"Optimal consumption" doesn't mean that you get high ingestion throughput
for free. The data streamer is highly optimized for a particular use case, and if
you try to achieve the same results with the putAll API, you will likely get worse
consumption.
If low memory consumption is more important for you than
Thanks Denis. For now, I will increase the memory.
But for the record, I will quote the Javadoc comments from the
IgniteDataStreamer interface:
* Data streamer is responsible for streaming external data into cache. It
achieves it by
* properly buffering updates and properly mapping keys
Hi Denis,
Thanks for the reply. Yes, I am using the streamer, but the whole point of using
the streamer is that it's the best API available for optimal memory
utilization, compared to putAll.
I am currently running on 1 GB; if I abruptly increase it to 5 GB, then what's the
importance of a streamer in such
Hi!
I see in the CSV that you provided that there are a lot of BinaryObjects
on the heap. It looks like you are streaming data using DataStreamer, and batches
of it are stored on the heap.
I don't see a reason for confusion: the app consumed more memory
than during the last 6 months, so it failed with
Hi All,
Any reasoning on the same? Let me know if any more details are required.
Hi Denis,
I agree that 1 GB is low, but that is a constraint I have, and it has been
running just fine with the same load. Did you get a chance to look at the CSV
files I uploaded after analyzing the heap dump?
Hi!
Looks like 1 GB of heap is not enough for your app to work normally. Give
it more memory; it's a Java application after all :)
It's true that persistence helps you forget about eviction policies, and your
data is safe even when memory runs out. But this mechanism affects
only off-heap
Hi All,
Ignite has thrown an OutOfMemory Error, the stack of which is mentioned
below
2017-12-29 10:19:11,655 ERROR
[tcp-disco-sock-reader-#40%63d769bd-bea6-450e-ab8b-0697da94ef1e%] {}
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Runtime error caught
during grid runnable execution:
Hi Dmitriy,
It looks like you configured a memory policy and memory configuration that are
not used by your caches. It also looks like your cache uses the default memory
configuration; try adding the settings to the memoryConfiguration and the memoryPolicy configuration
Evgenii
2017-12-01 16:42 GMT+03:00 Alexey
Hi,
You identified the problem correctly: there is not enough memory to handle 15 GB
of Postgres data on your server. Your idea to configure a memory policy to
increase available memory is right, but 16 GB is also not enough. The Ignite data
size is noticeably larger (up to 3 times, depending on many factors
Hi Ignite team,
My cluster is a Windows server with 32 GB RAM (24 free). I built the project in
gridgain.console and use default properties for my project (only changed the
query parallelism parameter). When I run my project in IDEA I get the following
error log:
[18:13:20] Ignite node started OK (id=7598c95e,
Hello Amit,
There are plans to make the cluster heal itself by kicking out unstable
nodes or unblocking pending transactions if an abnormal situation happens:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-5+Cluster+reaction+if+node+detects+an+extraordinary+situations
Created a
Hello!
My recommendation here is to always leave some extra RAM and heap so that a
hot spot won't cause OOM. Maybe use less RAM-intensive algorithms.
Without stack traces and logs it's hard to say more, but OOM may not be a
recoverable error with Ignite.
Regards,
--
Ilya Kasnacheev
Hi Ilya,
Thanks for the response.
I have been following the release notes for every release - 2.1/2.2/2.3. I
haven't seen any fixes around this (or a similar-sounding) issue. Since I am
using Ignite in a very critical application, I would like to use a stable
version which meets my requirements. I
Hello!
I would recommend using 2.2 or 2.3 and not 2.0.
Having said that, it makes sense to avoid OOM because in many places
behavior is undefined once you hit OOM. It should not be hard to avoid.
It should not cause the cluster to hang, but without logs from the server nodes it's
hard to understand
Hi,
I am using Ignite 2.0. I have observed that if there is an out of memory
error on any Ignite client node, the complete cluster becomes unresponsive.
A few details about my caches/operations -
1. Atomicity mode - Transactional
2. Locking - Pessimistic with repeatable read.
Is this expected
You may be hitting this scenario, from my experience:
As you have three nodes, you begin to get deadlocks between load tasks.
These deadlocks cause tasks to be postponed, but the real trouble happens when
they survive past 30 seconds and are dumped to the logs. There's a massive amount
of data in your tasks
I'm running compute tasks to load CSV files into a cache. The program runs
fine on one node, but I get out-of-memory errors (Java heap) when running on
3 VMs. It's 2.2 and I've tried various config options. If I comment out the
streamer use, then Ignite still seems to stop after several thousand tasks
*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Wednesday, October 18, 2017 1:02 PM
*To:* user@ignite.apache.org
*Subject:* Running out of memory using ScanQuery. Memory leak?
I have run into something odd using ScanQuery (using Ignite 2.2, C# client
and native persistence). I have a fairly simple piece of scanning code
below; it simply looks through all the keys in a cache and emits them to a
file.
It creates a ScanQuery, sets its page size to 1, and explicitly
Thanks for the quick response.
> How many nodes do you have on the one machine?
We have a single Ignite node on the machine.
> How many Visor clients on that machine?
We mostly monitor Ignite using ignitevisor, and it might have happened that a
Visor client did not shut down correctly.
I think it could be
This error is not about heap memory; it indicates that you started more
threads within one process than the OS allows. On Linux you can play
with ulimit to overcome the issue.
Also, how many nodes do you have on the one machine? How many Visor clients on
that machine? Could you share logs from
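A quick way to inspect the limits being referred to, on Linux (these commands only read the current values; what to set them to depends on your workload):

```shell
# Max processes/threads the current user may create; each Ignite thread counts
ulimit -u
# System-wide ceiling on threads
cat /proc/sys/kernel/threads-max
```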
Hello igniters,
I see the following warning in the log -
[17:23:29,070][WARN ][main][TcpCommunicationSpi] Message queue limit is set
to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver
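The warning refers to TcpCommunicationSpi's `messageQueueLimit` property; a sketch of setting a non-zero limit in Spring XML (the value 1024 is an illustrative choice, not a recommendation from this thread):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- a non-zero limit enables back-pressure instead of unbounded queue growth -->
            <property name="messageQueueLimit" value="1024"/>
        </bean>
    </property>
</bean>
```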
Sorry, my bad. The example also talks in terms of the number of entries.
Thanks for the help.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Out-Of-Memory-tp13829p13969.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
comes out to be 1.5 MB,
which is too small to get an OOME with 350 cache definitions.
In order to define the correct configuration, can you please clarify again?
Thanks,
-Sam
I will try it out. One question -
Does the default of 1_500_000 mean the number of entries in the map, or bytes, or KB, or MB?
The documentation does not clarify this either.
Thanks,
-Sam
Hi,
We are using Ignite 1.9, OFFHEAP with SWAP disabled. We are creating caches
programmatically and want to use SQL.
At one instance, creating 350 empty caches ran into out-of-memory. We are
already setting a low queue size for delete history
(IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=100)
Log metrics
job on server node?
>    IgniteFuture future = compute.future();
>    future.cancel();
>
>    getIgnite().close();
> https://apacheignite.readme.io/v1.7/docs/performance-tips#configure-thread-pools
>
> vdpyatkov wrote
> > Hi Alex,
> > I think these threads are executed in thread pools, and the number of
> > threads is always restricted by the pool size [1].
> > You can configure the sizes manually:
> >
>
>
> [1]:
> https://apacheignite.readme.io/v1.7/docs/performance-tips#configure-thread-pools
hundreds of jobs, the huge number of threads will cause out of memory.
distributed lock. Hoping the system property
suggested is not limited to ATOMIC only.
Thanks,
-Sam
same issue.
Any more hints or tweaks?
Thanks,
-Sam
You can use the loadCache() method for bulk loading. It allows you to provide a set
of optional parameters that can be used to specify different conditions,
like time ranges.
-Val
Ok,
so the eviction policy allows configuring what data should be dropped from the
cache. If dropped data is needed again, it will be loaded from the data store
(for example, PostgreSQL).
1. The eviction policy protects against out of memory (data can always be loaded
back into the cache). Am I right?
Wait,
I don't understand.
I am writing data with write-through.
What happens when memory runs out?
Is this data not persisted in Postgres?
Hi,
If an OOM happens while persisting data, for example in the JDBC driver, the
operation will fail and the update will be lost.
2016-04-22 12:15 GMT+03:00 tomk <rrrtomtom...@gmail.com>:
> Hello,
> I am wondering what will happen in the case of out of memory.
> I mean write-through mode. My data will
Hello,
I am wondering what will happen in the case of out of memory.
I mean write-through mode. Will my data be lost? I assume it is always saved
into the underlying database.