Understood, thanks for the explanation, Stephen

On Tue, Jan 19, 2021 at 10:00 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> Ignite *pages* its data. So if you access a record, it will transparently
> be copied from disk to memory. It doesn’t proactively go to disk and pull
> in records you might need.
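>
> For reference, a minimal sketch of the kind of node configuration this
> relies on (assuming Ignite 2.9 or later; the 4 GB region cap and the class
> name are arbitrary examples, not something from this thread):
>
>     import org.apache.ignite.Ignite;
>     import org.apache.ignite.Ignition;
>     import org.apache.ignite.cluster.ClusterState;
>     import org.apache.ignite.configuration.DataStorageConfiguration;
>     import org.apache.ignite.configuration.IgniteConfiguration;
>
>     public class PersistentNodeStartup {
>         public static void main(String[] args) {
>             IgniteConfiguration cfg = new IgniteConfiguration();
>
>             DataStorageConfiguration storage = new DataStorageConfiguration();
>             // The default data region keeps hot pages in off-heap RAM up to maxSize;
>             // colder pages live on disk and are read back transparently on access.
>             storage.getDefaultDataRegionConfiguration()
>                 .setPersistenceEnabled(true)
>                 .setMaxSize(4L * 1024 * 1024 * 1024); // e.g. cap the region at 4 GB
>             cfg.setDataStorageConfiguration(storage);
>
>             Ignite ignite = Ignition.start(cfg);
>             // Clusters with native persistence need explicit activation.
>             ignite.cluster().state(ClusterState.ACTIVE);
>         }
>     }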
>
> On 19 Jan 2021, at 16:18, Ryan Trollip <ryanonthebe...@gmail.com> wrote:
>
> Stephen
>
> It seems the problem is that how Ignite handles this wasn't clear to me. To
> make sure I understand this correctly:
> Ignite with native persistence on automatically overflows pages to disk
> using a least-recently-used (LRU) policy, which is great.
> A single page might hold only part of a row in a table, but that's ok because
> a SQL query that touches it will pull from RAM and disk automatically under
> the covers, i.e. it's all done auto-magically for us.
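>
> To make that concrete, a minimal sketch of a SQL query against such a cache
> (assuming a hypothetical cache named "branches" with SQL enabled for a
> Branch type; the names and the LIKE filter are made up for illustration):
>
>     import java.util.List;
>     import org.apache.ignite.Ignite;
>     import org.apache.ignite.Ignition;
>     import org.apache.ignite.cache.query.SqlFieldsQuery;
>
>     public class BranchQuery {
>         public static void main(String[] args) {
>             // Start a node with default configuration; a real setup would
>             // join the existing cluster instead.
>             Ignite ignite = Ignition.start();
>
>             // Whatever pages this query needs that are not currently in RAM
>             // are read back from disk transparently; the code does not change.
>             List<List<?>> rows = ignite.cache("branches").query(
>                 new SqlFieldsQuery("SELECT id, name FROM Branch WHERE name LIKE ?")
>                     .setArgs("feature/%")
>             ).getAll();
>
>             rows.forEach(r -> System.out.println(r.get(0) + " " + r.get(1)));
>         }
>     }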
>
> What is still not clear is: as branches are deleted, or we scale up servers
> and more memory becomes available, will Ignite rotate back into memory what
> was rotated out to disk, presumably in reverse LRU order?
>
> Thanks!!
> Ryan
>
>
> On Tue, Jan 19, 2021 at 7:28 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> As long as new branches — to use your analogy — are in memory, why does
>> it matter that a few others are too? The least recently used (LRU) branches
>> will automatically be purged from memory if space is needed for new
>> branches.
>>
>> In fact, if you’re worried about available memory, a *time based* eviction
>> policy won’t work. Say you expect to have 100 branches and size your cluster
>> accordingly, and then suddenly 1,000 branches are created. Boom!
>>
>> With a space-based eviction policy — as is the default with Ignite native
>> persistence — that scenario is handled just fine.
>>
>> You could create a cache with an eviction policy. When the records are
>> deleted after a week, you can have a process listening for the delete events
>> that copies each record to a different cache in a small data region.
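>>
>> A rough sketch of that approach (assuming an already-started node named
>> `ignite`, a one-week touched-expiry so entries expire a week after they were
>> last accessed, and a hypothetical small data region called "smallRegion"
>> defined in the node configuration; the cache names and String value type are
>> placeholders, the listener reacts to the expired-entry event rather than a
>> plain delete event, and EVT_CACHE_OBJECT_EXPIRED has to be enabled via
>> IgniteConfiguration#setIncludeEventTypes):
>>
>>     import java.util.concurrent.TimeUnit;
>>     import javax.cache.expiry.Duration;
>>     import javax.cache.expiry.TouchedExpiryPolicy;
>>     import org.apache.ignite.IgniteCache;
>>     import org.apache.ignite.configuration.CacheConfiguration;
>>     import org.apache.ignite.events.CacheEvent;
>>     import org.apache.ignite.events.EventType;
>>     import org.apache.ignite.lang.IgnitePredicate;
>>
>>     // "Hot" cache: an entry expires one week after it was last read or written.
>>     CacheConfiguration<String, String> hotCfg = new CacheConfiguration<>("branchesHot");
>>     hotCfg.setExpiryPolicyFactory(
>>         TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.DAYS, 7)));
>>     IgniteCache<String, String> hot = ignite.getOrCreateCache(hotCfg);
>>
>>     // "Archive" cache assigned to a small data region defined elsewhere.
>>     CacheConfiguration<String, String> coldCfg = new CacheConfiguration<>("branchesArchive");
>>     coldCfg.setDataRegionName("smallRegion");
>>     IgniteCache<String, String> cold = ignite.getOrCreateCache(coldCfg);
>>
>>     // Copy each expired entry into the archive cache. This listener is local
>>     // to one node, so register it on every server; oldValue() can be null
>>     // depending on configuration, hence the check.
>>     ignite.events().localListen((IgnitePredicate<CacheEvent>) evt -> {
>>         if ("branchesHot".equals(evt.cacheName()) && evt.oldValue() != null)
>>             cold.put(evt.key(), (String) evt.oldValue());
>>         return true; // keep the listener registered
>>     }, EventType.EVT_CACHE_OBJECT_EXPIRED);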
>>
>> So what you’re asking for is possible; it’s just more complicated and
>> less effective than the alternative.
>>
>> On 19 Jan 2021, at 14:04, Ryan Trollip <ryanonthebe...@gmail.com> wrote:
>>
>> Stephen
>>
>> Let's use an analogy of projects in source control. Say we have a very
>> active community of developers who create 100 new branches a day. Each
>> branch has a few thousand objects and associated properties etc., but
>> these developers don't clean up by deleting branches.
>> We want new branches to be cached in memory and available for
>> high-performance reads and writes, but older branches to go to disk to save
>> on memory hardware, since many are abandoned.
>> The policy could read something like this: branches that have not been
>> accessed in 1 week move to disk; on access, a branch that is on disk moves
>> back to RAM.
>>
>> Thanks
>> Ryan
>>
>> On Tue, Jan 19, 2021 at 2:33 AM Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> I guess I’m still not clear why you need to explicitly remove them from
>>> memory.
>>>
>>> By virtue of using native persistence, they’re already on disk. If you
>>> load new data, the old entries will eventually be flushed from memory (but
>>> remain on disk). What do you gain by removing entries from memory at a
>>> specific time?
>>>
>>> Regards,
>>> Stephen
>>>
>>> > On 19 Jan 2021, at 06:02, Naveen <naveen.band...@gmail.com> wrote:
>>> >
>>> > Hi Stephen
>>> >
>>> > On the same mail chain: we also have data like OTPs (one-time passwords)
>>> > which are not relevant after a while, but we don't want to expire or
>>> > delete them, just get them flushed to disk. Likewise, we have other
>>> > requirements where data is relevant only for a certain duration and is
>>> > not important later on. That's the whole idea of exploring eviction
>>> > policies.
>>> >
>>> > Naveen
>>> >
>>> >
>>> >
>>> > --
>>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>>
>>>
>>
>>
>
>
