https://bugs.kde.org/show_bug.cgi?id=404057

--- Comment #28 from Kai Krakow <k...@kaishome.de> ---
(In reply to Kai Krakow from comment #19)
> After testing this for a few days, with my patches it works flawlessly: no
> performance impact, krunner finds results immediately without thrashing the
> HDD, etc. That is, until you reboot: while the patches remove any perceived
> negative impact on desktop responsiveness, I see that Baloo still re-indexes
> all files.
> 
> Steps to reproduce:
> 
> 1. Start baloo for the first time (aka "remove your index")
> 2. Let it complete a full indexing cycle
> 3. Observe "balooctl indexSize" to see that there's no file left to index
> 4. Also observe that it has data in "ContentIndexingDB"
> 5. Reboot
> 6. Observe "balooctl indexSize" to see that "ContentIndexingDB" changed back
>    to 0 bytes
> 7. Observe "balooctl status" and wait until it finished checking for missing
>    files
> 8. Observe both commands to see how baloo now refills the ContentIndexingDB
>    and adds all your files to the index again, resulting in double the amount
>    of files after finishing the second run
> 9. From now on, on every reboot, the index only grows, constantly forgetting
>    the ContentIndexingDB and re-adding all files.
> 
> This behavior is with and without my patches.
> 
> @Martin Is this what you've seen, too?

Following up with more details:

The problem seems to be the following:

After a reboot, the indexer considers all files changed. For every file in the
index, it logs to stdout/stderr:

"path to file" id seems to have changed. Perhaps baloo was not running, and
this file was deleted + re-created

This results in a lot of transactions to the database, blowing it up in size
(due to its lock-free, copy-on-write design it appends new pages before
freeing the old ones) and creating high IO pressure because many fsync calls
happen in short succession.

What follows: after removing all the seemingly changed files from the database,
it re-indexes all those files. This in turn appends to the database again, it
seems, probably because it is unlikely to find big enough "holes" near the
beginning of the database: although a lot of data has been removed, that space
has probably been refilled with metadata updates, leaving no room to put the
content index data back in.
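
By the way, one way to watch the file grow and the free pages pile up is the
mdb_stat tool that ships with LMDB (assuming the index is the usual single
MDB_NOSUBDIR file under ~/.local/share/baloo/, which is what my installation
has):

    mdb_stat -nef ~/.local/share/baloo/index

Here -n treats the path as the data file itself, -e prints environment info
(map size, pages used) and -f prints the freelist status, so the numbers can
be compared before and after a reboot cycle.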

This access pattern adds up over time, spreading the data out more and more
and leading to very random access patterns. The kernel starts to struggle with
the mmap because it constantly has to page in new data: due to the random,
widely spread accesses, it becomes less and less likely that the needed pages
are already resident. Access behavior becomes increasingly seeky; the database
contents could be said to be too fragmented. This introduces high desktop
latency because baloo starts to dominate the page cache with its mmap. After
all, we should keep in mind that LMDB's design targets systems that primarily
run only the database, not systems mixed with desktop workloads.
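
To check whether this is really what happens, here is a small standalone
sketch (my own debugging aid, not Baloo code) that uses mincore() to report
how much of the mmap'ed index file is currently resident; if the percentage
stays low while baloo is busy, every lookup is likely to hit the disk:

    /* residency.c - how much of a file mapping is currently in RAM? */
    /* build: cc -o residency residency.c                            */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;          /* usage: residency <index file> */
        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) return 1;

        long psz = sysconf(_SC_PAGESIZE);
        size_t pages = (st.st_size + psz - 1) / psz;
        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        unsigned char *vec = malloc(pages);
        if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0)
            return 1;

        size_t resident = 0;
        for (size_t i = 0; i < pages; i++)
            resident += vec[i] & 1;      /* bit 0: page is resident */
        printf("%zu of %zu pages resident (%.1f%%)\n",
               resident, pages, 100.0 * resident / pages);
        return 0;
    }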

The phabricator site already has some valuable analysis and ideas on this,
which I collected here: https://phabricator.kde.org/T11859

Things that currently help a lot here:

  - Removing fsync from baloo via patch (this seems to have the biggest impact)
  - Limiting the working-set memory baloo can use via cgroups (see the
    example right after this list)
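
For reference, the cgroup limit can be as simple as something along these
lines (cgroup v2 shown as an example; the cgroup name and the 256M value are
arbitrary, baloo_file is the indexer process):

    mkdir /sys/fs/cgroup/baloo
    echo "$(pidof baloo_file)" > /sys/fs/cgroup/baloo/cgroup.procs
    echo 256M > /sys/fs/cgroup/baloo/memory.high

memory.high throttles and reclaims instead of OOM-killing, which is what we
want here: baloo keeps running, it just can no longer push everything else
out of the page cache.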

Removing fsync from baloo could mean that the database is no longer crash-safe.
Thus, I suggest not using my fsync patch upstream until such situations have
been tested extensively (I do not know how to do that; it's a tedious task
depending on a vast number of factors) or until someone comes up with a clever
recovery/transaction idea. Maybe the LMDB author has some more insight on this.
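
For the record, LMDB itself exposes this trade-off as environment open flags,
so one alternative to patching the fsync call sites would be flipping a flag.
A minimal sketch (not Baloo's actual code; path and map size are placeholders):

    #include <lmdb.h>

    int main(void)
    {
        MDB_env *env;
        mdb_env_create(&env);
        mdb_env_set_mapsize(env, 1UL << 30);   /* 1 GiB map, example value */
        /* MDB_NOSYNC: commits no longer fsync; a crash may lose the last
         * transactions - exactly the crash-safety concern above.
         * MDB_NOMETASYNC is a weaker variant that only skips the meta
         * page sync. */
        mdb_env_open(env, "index", MDB_NOSUBDIR | MDB_NOSYNC, 0600);
        /* ... transactions ... */
        mdb_env_close(env);
        return 0;
    }

Whether Baloo should set these flags is the same open question as with my
patch, of course.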

Limiting memory with cgroups helps because cgroups account for both heap and
page-cache usage when enforcing the limit: it effectively keeps baloo from
dominating the cache and thus from impacting desktop performance too much. The
read-ahead patch reduces additional pressure on cache occupancy. It should
also be possible to use madvise()/fadvise() to actively tell the kernel that
baloo no longer uses some memory or doesn't plan to use it in the near future.
I'm not sure whether baloo and/or LMDB use these functions, or how they use
them.
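
If someone wants to experiment with this, a rough sketch of the idea (an
assumption of how it could be wired in, not what Baloo currently does):

    #define _DEFAULT_SOURCE           /* for madvise() */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <fcntl.h>

    /* After finishing a batch: tell the kernel the pages just touched are
     * not needed again soon, so they get evicted before they push other
     * programs' data out of the page cache. */
    static void drop_batch_from_cache(void *map_region, size_t len,
                                      int fd, off_t off, off_t flen)
    {
        madvise(map_region, len, MADV_DONTNEED);            /* mmap'ed read path */
        posix_fadvise(fd, off, flen, POSIX_FADV_DONTNEED);  /* fd-based write path */
    }

map_region must be page-aligned, and MADV_DONTNEED on a shared file mapping
only drops the cached copy - the data stays in the file - so this should be
safe, but it clearly needs benchmarking before going anywhere near upstream.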

Also, I wonder if LMDB uses MAP_HUGETLB. It may be worth checking whether
flipping this setting improves or worsens things (see the sketch after the
list), because I can think of different scenarios:

 1. hugetlb uses 2 MB page size which could reduce the IOPS needed to work
    on spatially near data in the DB (good)
 2. 2 MB page size could increase the IO throughput needed when paging DB data
    in, thus negatively impacting the rest of the system (bad)
 3. LMDB should grow the database in bigger chunks to reduce external
    fragmentation of the index file; hugetlb could help with that (undecided)
 4. hugetlb with very random, spread-out access patterns could increase the
    memory pressure (bad)
 5. 4k pages with very random access patterns could reduce memory pressure
    (good)
 6. hugetlb would improve sequential access patterns by reducing IOPS pressure
    (undecided)
 7. hugetlb would reduce TLB lookups in the processor, which is said to give
    up to a 10% performance improvement for memory-intensive workloads (good)
 8. hugetlb can introduce allocation stalls, which leads to very perceivable
    lags in desktop responsiveness because the kernel is more likely to have
    to defragment memory to satisfy huge page allocations (bad)
 9. There are systems out there that support a 1 GB page size; we definitely
    don't want that - it would effectively lock the whole DB into memory (bad)
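
To make the MAP_HUGETLB part concrete, this is what the two knobs look like at
the mmap level. This is a generic sketch, not LMDB code, and one caveat
applies: MAP_HUGETLB only works for anonymous or hugetlbfs-backed mappings, so
for a regular index file the closest equivalent would be asking for
transparent huge pages:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* Anonymous mapping explicitly backed by 2 MB huge pages; needs
     * pre-reserved hugepages and returns MAP_FAILED otherwise. */
    static void *map_anon_huge(size_t len)
    {
        return mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    }

    /* For an existing file-backed map (like LMDB's) we can only *ask* for
     * transparent huge pages; whether the kernel honors this for the
     * filesystem in question is exactly the open question above. */
    static void ask_for_thp(void *existing_map, size_t len)
    {
        madvise(existing_map, len, MADV_HUGEPAGE);
    }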

Maybe - and we can see discussions around this topic in phabricator - it makes
sense to put more effort into designing the database schema and access
patterns around one of the above scenarios and optimize for that one,
whichever fits better. The current consensus seems to be that it isn't really
optimized around any design pattern - it just does its thing. And of course we
should fix some bugs, like the one discarding all the contents on each reboot.
But that seems to be tightly coupled with some of the design decisions that
went into the database schema.

Also, in the bees project (https://github.com/Zygo/bees), for example, the
author found that using mmap without memlock performs very badly on a system
busy with other tasks. So he decided to lock anonymous memory and use a
writeback thread which writes data back in big enough chunks to not introduce
too much fragmentation. LMDB currently does something similar: writes to the
database do not go through the mmap. But the memory isn't locked - and locking
it isn't an option here, as the DB potentially grows bigger than the system's
RAM.

In this light, I wonder if it's possible for LMDB (or some other DB engine) to
be mmap-based but use some sort of journal: locking the journal into memory
(mmap-backed) and using a writeback thread that writes the journal to the DB
at regular intervals could work very well. It would double the amount of data
written, though. Baloo already follows a similar idea: it batches updates into
transactions of 40 files (roughly as sketched below). But that approach is not
optimal from various perspectives. I'd like to work on this aspect next.
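
To make that last point concrete, amortizing the commit cost at the LMDB level
looks roughly like this (a generic sketch with made-up names, not Baloo's
actual code): many puts share one transaction and therefore one write barrier:

    #include <lmdb.h>
    #include <string.h>

    /* Commit once per batch instead of once per file, so the fsync and
     * the copy-on-write page churn are amortized over the whole batch. */
    static void index_batch(MDB_env *env, MDB_dbi dbi,
                            const char **paths, const char **terms, size_t n)
    {
        MDB_txn *txn;
        mdb_txn_begin(env, NULL, 0, &txn);
        for (size_t i = 0; i < n; i++) {
            MDB_val key = { .mv_size = strlen(paths[i]),
                            .mv_data = (void *)paths[i] };
            MDB_val val = { .mv_size = strlen(terms[i]),
                            .mv_data = (void *)terms[i] };
            mdb_put(txn, dbi, &key, &val, 0);
        }
        mdb_txn_commit(txn);   /* one sync for the whole batch */
    }

The journal/writeback idea would essentially move this batching out of the
indexer and into the storage layer, so the batch size could be chosen by write
volume instead of by file count.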

-- 
You are receiving this mail because:
You are watching all bug changes.
