Raymond.

>>
>> On Thu, Jun 11, 2020 at 4:25 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Just a correction to the context of the data region running out of
>>> memory: this one does not have a queue of items or a continuous query.
… and the log I obtain when running it.

> Running from a clean slate (no existing persistent data) this reproducer
> exhibits the out of memory error when adding an element 4150 bytes in size.
>
> I did find this SO article (
> https://stackoverflow.com/questions/55937768/igni
Pavel,

I have run into a different instance of an out of memory error in a data
region, in a different context from the one I wrote the reproducer for. In
this case there is an activity which queues items for processing at a point
in the future, and which does use a continuous query; however, there is
also significant van…
gt;>>> this error, but I want to minimise the in-memory size for this buffer as it
>>>> is essentially just a queue.
>>>>
>>>> The suggestion of enabling data persistence is strange as this data
>>>> region has already persistence enabled for it.
>
>>> My assumption is that Ignite manages the memory in this cache by saving
>>> and loading values as required.
The test workflow in this failure is one where ~14,500 objects totalling
~440 MB in size (average object size = ~30 KB) are added to the cache and
are then drained by a processor using a continuous query. Elements are
removed from the cache as the processor completes them.
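For reference, the setup being described — a cache bound to a persistence-enabled data region with a bounded in-memory size — can be sketched with the Ignite 2.x configuration API. This is a minimal sketch, not the poster's actual configuration; the region name, cache name, and sizes are illustrative assumptions:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class QueueRegionConfig {
    public static void main(String[] args) {
        // Hypothetical region kept small because the cache is "essentially
        // just a queue"; with persistence enabled, Ignite can page cold
        // entries out to disk instead of keeping everything in RAM.
        DataRegionConfiguration queueRegion = new DataRegionConfiguration()
            .setName("queueRegion")              // illustrative name
            .setPersistenceEnabled(true)         // as in the thread
            .setMaxSize(512L * 1024 * 1024);     // 512 MB cap (assumption)

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(queueRegion);

        // Bind the queue-like cache to the persistent region by name.
        CacheConfiguration<Long, byte[]> queueCache =
            new CacheConfiguration<Long, byte[]>("queueCache")
                .setDataRegionName("queueRegion");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage)
            .setCacheConfiguration(queueCache);

        Ignition.start(cfg);
    }
}
```

With persistence enabled on the region, one would expect page replacement to disk rather than an out-of-memory failure when the region fills, which is what makes the error reported in this thread surprising.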
Is this kind of out of memory error supposed to be possible in a data
region with persistence enabled?
Hi Dmitriy,

It looks like you have configured a memory policy and memory configuration
that are not used by your caches; the caches are using the default memory
configuration instead. Try referencing your configured memoryPolicy from
the cache configuration so that your memoryConfiguration and memoryPolicy
settings are actually applied.

Evgenii
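Evgenii's suggestion corresponds to the pre-2.3 Ignite memory API, where a named memory policy must be referenced by name from each cache, otherwise the cache silently falls back to the default policy. A hedged sketch under that assumption (the policy name, cache name, and size are made up for illustration):

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class MemoryPolicyWiring {
    public static IgniteConfiguration configure() {
        // Custom policy (illustrative name and size).
        MemoryPolicyConfiguration policy = new MemoryPolicyConfiguration()
            .setName("bigPolicy")
            .setMaxSize(8L * 1024 * 1024 * 1024);   // 8 GB

        MemoryConfiguration memCfg = new MemoryConfiguration()
            .setMemoryPolicies(policy);

        // Without setMemoryPolicyName the cache uses the *default* policy,
        // which is the mistake described in the reply above.
        CacheConfiguration<Object, Object> cacheCfg =
            new CacheConfiguration<Object, Object>("myCache")
                .setMemoryPolicyName("bigPolicy");

        return new IgniteConfiguration()
            .setMemoryConfiguration(memCfg)
            .setCacheConfiguration(cacheCfg);
    }
}
```

In Ignite 2.3 and later this API was superseded by DataStorageConfiguration / DataRegionConfiguration, with CacheConfiguration.setDataRegionName playing the same binding role.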
2017-12-01 16:42 GMT+03:00 Alexey
Hi,

You identified the problem correctly: there is not enough memory to handle
15G of Postgres data on your server. Your idea of configuring a memory
policy to increase the available memory is right, but 16G is also not
enough: the in-memory size of data in Ignite is noticeably larger than the
source data (up to 3 times, depending on many factors …
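The arithmetic behind that reply, as a small worked sketch (the 3x factor is the upper bound quoted above; the actual expansion depends on indexes, backups, and per-entry overhead):

```java
public class SizingEstimate {
    /** Rough Ignite memory footprint for a given source data size, in GB. */
    static long estimateGb(long sourceGb, long expansionFactor) {
        return sourceGb * expansionFactor;
    }

    public static void main(String[] args) {
        long sourceGb = 15;                      // Postgres data, from the thread
        long neededGb = estimateGb(sourceGb, 3); // worst-case 3x expansion
        // 15 GB * 3 = 45 GB, well beyond a 16 GB memory policy.
        System.out.println(neededGb + " GB needed; 16 GB configured: "
            + (neededGb <= 16 ? "enough" : "not enough"));
    }
}
```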
Hi Ignite team,

My cluster is a Windows server with 32 GB RAM (24 GB free). I built the
project in gridgain.console and used the default properties for my project
(only changed the Query parallelism parameter). When I run my project in
IDEA I get the following error log:

[18:13:20] Ignite node started OK (id=7598c95e,