On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
> On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski <mizde...@redhat.com> wrote:
>> On 07/11/2018 06:37 PM, Andrew Lutomirski wrote:
>>> From my perspective as an occasional Fedora packager, I'm regularly
>>> surprised by just how long it takes for Koji builders to install
>>> dependencies. I've never tried to dig in too far, but it looks like the
>>> builders download package metadata, download packages, and then install
>>> things. Surely this could be massively optimized by having the metadata
>>> pre-downloaded (at least when side tags aren't involved) and by having the
>>> packages already present over NFS or similar.
>> Koji gets repodata and packages from HTTP servers, through caching
>> proxies located in the same datacenters as builders. Most often used
>> packages are cached in memory, so download speeds are not a problem. At
>> least for non-s390x builders. Accessing packages directly from NFS would
>> be slower.
> I wonder if the time taken to decompress everything is relevant.
> Fedora currently uses xz, which isn't so fast. zchunk is zstd under
> the hood, which should be a lot faster to decompress, especially on ARM.
Repodata consumed by dnf is gzip-compressed, which is quick to
decompress. But decompression is done in the same thread as XML
parsing and creating pool data structures, so it affects repodata
loading times to some degree.
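To illustrate that single-threaded pipeline: decompression and XML
parsing happen in one pass over the same stream, so their costs add up.
A stdlib-only sketch with miniature, made-up metadata (real primary.xml
is far larger):

```python
import gzip
import io
import xml.etree.ElementTree as ET

# Tiny stand-in for gzip-compressed primary.xml-style repodata.
xml = (b"<metadata>"
       b"<package><name>foo</name></package>"
       b"<package><name>bar</name></package>"
       b"</metadata>")
blob = gzip.compress(xml)

# dnf-style pipeline: gunzip and XML parsing interleaved in one
# thread, building in-memory structures as elements complete.
names = []
with gzip.open(io.BytesIO(blob)) as fh:
    for _event, elem in ET.iterparse(fh):
        if elem.tag == "name":
            names.append(elem.text)
print(names)
```

Every byte flows through both the decompressor and the parser before
any pool structure exists, which is why even a fast codec still shows
up in repodata loading time.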
>> The slowest part of setting up a chroot is writing packages to disk
>> synchronously. This can be sped up a lot by enabling nosync in the
>> site-defaults.cfg mock config on Koji builders, by setting cache=unsafe
>> on KVM build VMs, or both. These settings are safe because builders
>> upload all results to hubs upon task completion. With these settings,
>> chroot setup can take about 30 seconds.
> I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
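For reference, a sketch of the mock side of those settings, assuming
the standard mock option names (the cache=unsafe part lives elsewhere,
in the libvirt domain XML, as cache='unsafe' on the disk driver):

```
# mock site-defaults.cfg fragment (Python syntax). nosync LD_PRELOADs
# a small library that turns fsync()/fdatasync() into no-ops while
# packages are written into the chroot:
config_opts['nosync'] = True
# apply it even when the nosync library is not available for every
# architecture installed into the chroot (multilib builders):
config_opts['nosync_force'] = True
```

This trades crash durability of the build chroot for speed, which is
acceptable on throwaway builders whose results are uploaded to the hub.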
>> Once this is optimized, another slow part is loading repodata into
>> memory - uncompressing it, parsing and creating internal libsolv data
>> structures. This could be sped up by including solv/solvx files in
>> repodata, but I think that would require some code changes.
> Hmm. On my system, there are lots of .solv and .solvx files in
> /var/cache/dnf. I wonder if it would be straightforward to have a
> daily job that updates the builder filesystem by just having dnf
> refresh metadata and generate the .solv/.solvx files? There wouldn't
> be any dnf changes needed AFAICT -- just some management on the
> builder infrastructure. This would at least avoid a bunch of
> duplicate work on most builds.
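For concreteness, the suggested daily job could be as small as a cron
script like this (hypothetical path; `dnf makecache` is the command
that downloads metadata and leaves .solv/.solvx files in dnf's cache):

```
#!/bin/sh
# /etc/cron.daily/dnf-makecache (hypothetical) - force metadata
# expiry and refresh, so dnf regenerates its .solv/.solvx cache
# files ahead of builds.
exec dnf -q --refresh makecache
```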
That wouldn't save much time (and would still require Koji code
changes, as dnf uses a different cache directory for each task).
Similarly, caching chroots is not effective, which is why Koji disables
it: most repos simply change too often, and there are a lot of builders
(over 150). What would help is generating solv/solvx files during repo
generation - builders would just download and load them very quickly.
But that requires code changes and would only save a few seconds per
build.
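As a rough stdlib-only analogy for what shipping solv/solvx alongside
repodata would buy (this is not libsolv; pickle stands in for the
binary .solv cache, and the XML metadata is made up):

```python
import gzip
import pickle
import xml.etree.ElementTree as ET

# Stand-in for gzip-compressed XML repodata with many packages.
xml = (b"<metadata>"
       + b"".join(b"<package><name>pkg%d</name></package>" % i
                  for i in range(1000))
       + b"</metadata>")
blob = gzip.compress(xml)

def load_from_xml(blob):
    # What dnf does on every chroot setup today:
    # decompress, parse XML, build in-memory structures.
    root = ET.fromstring(gzip.decompress(blob))
    return [e.text for e in root.iter("name")]

# Done once, server-side, at repo generation time - the analogue of
# publishing pre-built solv/solvx files in the repo.
cache = pickle.dumps(load_from_xml(blob))

def load_from_cache(cache):
    # What builders would do instead: one cheap binary load,
    # no XML parsing at all.
    return pickle.loads(cache)
```

The parse work moves from every builder on every build to a single
pass at repo-generation time; builders only deserialize.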
Senior Software Engineer, Red Hat
devel mailing list -- firstname.lastname@example.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines