On Sat, Mar 21, 2020 at 1:32 AM Ben Cooksley <bcooks...@kde.org> wrote:
>
> Comments welcome. Please note that simply fixing the dependency
> breakage in this case is not enough to resolve this - there are
> underlying issues which need to be addressed here.
>
> Regards,
> Ben Cooksley
> KDE Sysadmin
I cannot comment on whether this is a pattern of behaviour or just a few isolated instances. From a technical perspective I feel there are two (additional) underlying issues worth addressing here:

1. This could for the most part be prevented by having CI run before the fact, i.e. prior to merging code, not after.
2. Different projects have different CI needs, and it would help if a project could safely manage its CI environment "on its own" as much as possible. The current system requires a lot of daunting (possibly otherwise unnecessary) complexity purely to cope with the fact that a builder image is used not just for one project but for perhaps the whole of KDE.

As for running CI beforehand: by this I mean that it is relatively easy to 'break the build' one way or another without this being caught during review, especially if a change is complex and has been in development for a while across many revisions. This is certainly a particularly severe case of breaking the build, but the same applies to e.g. tests that start to fail. At my day job I find it absolutely routine for commits to be pushed to feature branches for review that don't even pass the CI sanity check. This is not because people are lazy, but because of the perennial "works on my machine" pitfall (local config vs. remote), and sometimes because people want early review of, for example, code structure.

I think it would help massively for CI to run on feature branches as a prerequisite for merging. This should be automatic, require no special care on the part of reviewers, and the CI result should be immediately visible as part of the review tooling (workflow UI/UX). After that it is mostly a process of unlearning to commit to master directly.
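To make this concrete, here is a minimal sketch of what such a pre-merge pipeline could look like; the image name and build steps are placeholders, not an actual KDE configuration:

```yaml
# .gitlab-ci.yml -- minimal sketch; image name and build commands
# are hypothetical, not an existing KDE setup.
build-and-test:
  image: registry.example.org/kde/builder:latest  # hypothetical image
  rules:
    # Run this pipeline when a merge request is opened or updated,
    # so the result shows up directly in the MR review UI.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - cmake -B build
    - cmake --build build
    - cd build && ctest --output-on-failure
```

Combined with the project setting that requires pipelines to succeed before merging, a green pipeline becomes a hard prerequisite for merging without reviewers having to do anything special.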
This should be fairly easy to implement for invent.kde.org, in the sense that Gitlab offers such CI as a feature out of the box, fully integrated with the MR workflow.

Regarding your point about changing dependencies and the need for communication to manage the CI environment: to the extent that this breaks the build, it could be simplified if it were easier to manage the build environment yourself (from a project's perspective).

This brings me to my second point: I think it would be desirable for projects to be able to cater for their own CI needs as much as (safely) possible. I understand why you would want to avoid a proliferation of CI setups, but at the same time having a single image that contains everything plus the kitchen sink also leads to problems. In particular it becomes hard to ensure that the image and the versions of tooling or pre-installed libraries are compatible with every project, to keep all of this up to date, and to ensure that it all remains consistent with the CI aims of a particular project.

For instance, a KF5 framework might want to validate that it builds and passes tests against *both* an ancient version of Qt (for stability promises) and the most recent Qt release (and possibly even Qt from master, to get a heads-up on forward compatibility issues). A Plasma Mobile app might have rather different CI needs: it matters that both 64 and 32 bit builds are produced, both as Android apps and as flatpaks. A project might want to validate that it builds against both the master versions of KF5 frameworks and those more typically found on a typical distro. This sort of per-project complexity is hard to deal with in one big builder image, but is something we should ideally be able to pull off for all our projects as a matter of routine.
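The KF5 example above could be sketched as two jobs sharing one build recipe, each running in its own image; the registry paths and tags here are made up for illustration:

```yaml
# Sketch of per-project CI against two Qt baselines; image names
# and tags are hypothetical.
.build:
  script:
    - cmake -B build
    - cmake --build build
    - cd build && ctest --output-on-failure

qt-oldest:
  extends: .build
  image: registry.example.org/ci/qt:5.9    # oldest supported Qt

qt-latest:
  extends: .build
  image: registry.example.org/ci/qt:5.14   # current Qt release
```

Each project keeps this matrix in its own repository, so adding or dropping a baseline is a reviewed change in that project rather than a change to a shared builder image.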
I think we already run into these complexity problems pretty hard with the current CI setup. Taking the binary factory as an example: it is quite obvious that the JSON blob encoding dependency information for Android apps came out of a need for "yet another" layer of abstraction to manage the CI needs of many different projects, and that it is hard to grasp fully, let alone maintain. There is an ever-growing list of ad-hoc helper scripts needed to keep the JSON blob manageable, and even so it is not easy to follow when a project has to pull in multiple dependencies.

For comparison I would ask: how many people understand how all of the following actually work?

- The various instructions in Dockerfiles
- The related resources copied into the images, in particular the shell scripts
- The Python scripts that do much of the heavy lifting
- The Jenkins Groovy DSL scripts that actually make this work on Jenkins
- The repository metadata and dependency information files
- The various data files that encode crucial config data/build steps driving the various layers of scripts at run time
- The CMake of the project itself, including any settings that might be passed through e.g. environment variables ...
- How this all interacts with cross compiling
- What, exactly, the nested build environments become when you take into account the habit of executing commands by eval'ing them inside a shell

I'm sure there are a fair few heroes among us who can keep this ticking, but the simple act of typing it out struck me powerfully: *this is hard*. And this is just the stuff I am aware of now; the next time I need to dig into things I will probably have forgotten more than half of it.

Again, Gitlab (and therefore invent.kde.org) has a feature that allows for this: the `image` directive in .gitlab-ci.yml files. I think it would be very nice if projects could work with custom CI images.
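With that feature, a project points its whole pipeline at its own image with a single line; the registry path below is hypothetical:

```yaml
# Sketch: a project-specific CI image replacing the shared
# everything-and-kitchen-sink builder (hypothetical registry path).
image: invent-registry.example.org/myproject/ci-image:latest

build:
  script:
    - cmake -B build
    - cmake --build build
```

The layers of abstraction collapse: instead of data files driving scripts driving a shared image, the project's CI needs live in one short file next to the code they serve.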
To guard against malicious or compromised images, I suppose these would have to come from a trusted registry, and the instructions/code for building them should be managed (and reviewed) as part of a KDE project as well.

Regards,
- Johan
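One way this could be sketched, assuming a hypothetical ci-images repository: the image's Dockerfile lives in its own reviewed KDE repository, and a pipeline in that repository is the only thing that builds and pushes to the instance registry, so only images produced from reviewed code ever appear there. The variables used below are Gitlab's predefined registry credentials:

```yaml
# Sketch: CI job in a (hypothetical) reviewed ci-images repository
# that builds and publishes the trusted image.
build-image:
  image: docker:stable
  services:
    - docker:dind   # Docker-in-Docker service to run the build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

Projects would then reference only images from that registry, never arbitrary ones from Docker Hub.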