Yes, it's about Page Memory defragmentation.

Pages in partition files are stored sequentially, so it probably makes
sense to defragment pages first to avoid inter-page gaps, since we use
page offsets to manage them.
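
To illustrate (a minimal sketch, not Ignite's actual page code): with
sequential storage a page's position in the partition file is derived
from its index, so dropping a page from the middle leaves a gap that
cannot be reclaimed without shifting the following pages and fixing up
every stored offset:

    /** Hypothetical mapping of a page index to its partition file offset. */
    static long pageFileOffset(long pageIdx, int pageSize, int headerSize) {
        // Pages are laid out back to back after the file header.
        return headerSize + pageIdx * pageSize;
    }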

I filed an issue [1]; I hope we will be able to find resources to
solve it before the 2.8 release.

[1] https://issues.apache.org/jira/browse/IGNITE-10862

On Sat, Dec 29, 2018 at 10:47 AM Павлухин Иван <vololo...@gmail.com> wrote:
>
> I suppose it is about Ignite Page Memory page defragmentation.
>
> We can get 100 allocated pages, each of which becomes only, e.g., 50%
> filled after some entries are removed. But they will still occupy the
> space of 100 pages on the hard drive.
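>
> For illustration (assuming Ignite's default 4 KB page size):
>
>     int pages = 100;
>     int pageSize = 4096;                   // default page size
>     long onDisk = (long) pages * pageSize; // 409,600 bytes occupied
>     long live = onDisk / 2;                // ~204,800 bytes of live data
>     // ~200 KB stays allocated in the partition files even though it
>     // holds no data; it can be reused for new entries but is never
>     // returned to the OS.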
>
> On Fri, Dec 28, 2018 at 20:45, Denis Magda <dma...@apache.org> wrote:
> >
> > Shouldn't the OS take care of defragmentation? What we need to do is to
> > give a way to remove stale data and "release" the allocated space somehow
> > through tools, MBeans or API methods.
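> >
> > A hypothetical MBean shape for such a tool (nothing like this exists
> > in Ignite today; the names below are made up):
> >
> >     public interface DefragmentationMXBean {
> >         /** Estimated reclaimable bytes for a cache group. */
> >         long fragmentedSize(String cacheGroupName);
> >
> >         /** Shrinks the partition files of the given cache group. */
> >         void shrink(String cacheGroupName);
> >     }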
> >
> > --
> > Denis
> >
> >
> > On Fri, Dec 28, 2018 at 6:24 AM Vladimir Ozerov <voze...@gridgain.com>
> > wrote:
> >
> > > Hi Vyacheslav,
> > >
> > > AFAIK this is not implemented. Shrinking/defragmentation is an important
> > > optimization: not only because it releases free space, but also because
> > > it decreases the total number of pages. But it is not very easy to
> > > implement, as you have to reshuffle both data entries and index entries
> > > while maintaining consistency for concurrent reads and updates at the
> > > same time. Alternatively, we can think of offline defragmentation. It
> > > would be easier to implement and faster, but concurrent operations would
> > > be prohibited.
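> > >
> > > A rough sketch of what the offline variant could look like (hypothetical,
> > > assuming a made-up page format where byte 0 marks a page as live): with
> > > the node stopped, live pages are copied into a fresh file, which changes
> > > their offsets, so index entries have to be rebuilt afterwards:
> > >
> > >     // Uses java.nio.ByteBuffer, java.nio.channels.FileChannel,
> > >     // java.nio.file.Path and java.nio.file.StandardOpenOption.
> > >     static void compact(Path src, Path dst, int pageSize) throws IOException {
> > >         try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
> > >              FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
> > >                  StandardOpenOption.WRITE)) {
> > >             ByteBuffer page = ByteBuffer.allocate(pageSize);
> > >             while (in.read(page) == pageSize) {
> > >                 page.flip();
> > >                 if (page.get(0) != 0) // live page: append at a new offset
> > >                     out.write(page);
> > >                 page.clear();
> > >             }
> > >         }
> > >         // Index entries still point at the old offsets and must be rebuilt.
> > >     }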
> > >
> > > On Fri, Dec 28, 2018 at 4:08 PM Vyacheslav Daradur <daradu...@gmail.com>
> > > wrote:
> > >
> > > > Igniters, we have faced the following problem on one of our
> > > > deployments.
> > > >
> > > > Let's imagine that we have used an IgniteCache with PDS enabled over
> > > > time (a typical configuration is sketched below):
> > > > - disk space usage grew along with the amount of data, e.g. to 100 GB;
> > > > - then we removed stale data, e.g. 50 GB, which had become useless for
> > > us;
> > > > - disk usage stopped growing with new data, but the space was not
> > > > released and still took 100 GB instead of the expected 50 GB;
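> > > >
> > > > The setup in question is just standard Ignite persistence, e.g.:
> > > >
> > > >     IgniteConfiguration cfg = new IgniteConfiguration();
> > > >     cfg.setDataStorageConfiguration(new DataStorageConfiguration()
> > > >         .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
> > > >             .setPersistenceEnabled(true)));
> > > >     Ignite ignite = Ignition.start(cfg);
> > > >     ignite.cluster().active(true); // activate the cluster with PDS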
> > > >
> > > > Another use case:
> > > > - a user extracts data from an IgniteCache to store it in a separate
> > > > IgniteCache or another store;
> > > > - the disk is still occupied, and the user cannot store that data in a
> > > > different cache on the same cluster because of the disk limitation;
> > > >
> > > > How can we help the user free up disk space if the amount of data in
> > > > an IgniteCache has been reduced many times over and will not grow in
> > > > the near future?
> > > >
> > > > AFAIK, we have a page reuse mechanism that allows pages whose
> > > > previously stored data has been removed to be used again for storing
> > > > new data.
> > > > Are there any chances to shrink the data and free up space on disk
> > > > (with defragmentation if possible)?
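> > > >
> > > > Conceptually that reuse mechanism works something like the following
> > > > (a sketch, not the real FreeList/ReuseList implementation; the names
> > > > are illustrative):
> > > >
> > > >     Deque<Long> reuseList = new ArrayDeque<>(); // ids of emptied pages
> > > >
> > > >     long allocatePage() {
> > > >         Long recycled = reuseList.poll();
> > > >         return recycled != null
> > > >             ? recycled          // reuse a gap inside the file
> > > >             : appendNewPage();  // otherwise the file grows
> > > >     }
> > > >
> > > >     // Nothing here ever truncates the file, which is why removals
> > > >     // do not shrink it.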
> > > >
> > > > --
> > > > Best Regards, Vyacheslav D.
> > > >
> > >
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best Regards, Vyacheslav D.
