Hello!

We used to have per-page cache scanning, but it has been disabled for some
time because it was causing synchronization issues.

Apache Ignite is still a memory-centric database, which assumes that data
is either in memory or can be loaded into memory relatively quickly.

So I guess the only cache scan option currently is to read all blocks at
random.

We also assume that the persistence setup uses an SSD, which has random
read speeds on par with sequential ones (the term "sequential" may not be
applicable to SSDs at all). If your setup is based on HDDs, it may indeed
not perform optimally.
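
Since the larger page size helped in your test, you could make it permanent
via DataStorageConfiguration. A minimal sketch (assuming Ignite 2.x; note
that the page size applies to the persistence files at creation time, so an
existing store would have to be rebuilt after changing it):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LargerPageSizeExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Default page size is 4 kB; 8 kB pages mean fewer, larger reads
        // when scanning a cache that does not fit into memory.
        storageCfg.setPageSize(8 * 1024);

        // Enable native persistence for the default data region.
        storageCfg.getDefaultDataRegionConfiguration()
            .setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}
```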

Regards,
-- 
Ilya Kasnacheev


On Tue, 13 Apr 2021 at 21:28, Sebastian Macke <[email protected]> wrote:

> Hi Ignite Team,
>
> I have stumbled across a problem when iterating over a persistent cache
> that does not fit into memory.
>
> The partitioned cache consists of 50M entries across 3 nodes with a total
> cache size of 3*80GB on the volumes.
>
> I use either a ScanQuery or a SQL query over a non-indexed table. The
> results are the same with both.
>
> It can take over an hour to iterate over the entire cache. The problem
> seems to be that the cache is read from the volume in random 4 kB (page
> size) chunks, without parallelization. A page size of 8 kB exactly
> doubles the iteration speed.
>
> Is this Ignite's default behaviour? Is there an option to enable a more
> streaming-like solution?
> Of course, the order of the items in the cache doesn't matter.
>
> Thanks,
>
> Sebastian
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>