Re: KDE Frameworks with failing CI (master) (5 May 2024)

2024-05-05 Thread Ben Cooksley
On Sun, May 5, 2024 at 10:50 PM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Good news: 1 repo was fixed
>
> Bad news: 2 repos started failing
>
> kimageformats:
>  * https://invent.kde.org/frameworks/kimageformats/-/pipelines/680495
>   * Android fails
>* Needs a newer ECM than what the Android CI has
>
>
The .kde-ci.yml file for this is broken: it does not list Android as a
supported platform, so the job receives ECM from the image rather than
from the CI builds directly.
Will be fixed by
https://invent.kde.org/frameworks/kimageformats/-/merge_requests/217
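
For reference, the platform list lives in the Dependencies stanza of
.kde-ci.yml. A minimal sketch of the kind of entry involved (the dependency
list shown is an assumption for illustration, not the contents of the
actual MR):

Dependencies:
- 'on': ['Linux', 'FreeBSD', 'Windows', 'Android']
  'require':
    'frameworks/extra-cmake-modules': '@latest-kf6'

With Android present in the 'on' list, the job fetches ECM from the CI
artifacts rather than relying on whatever version the image ships.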


>
> kuserfeedback:
>  * https://invent.kde.org/frameworks/kuserfeedback/-/pipelines/680497
>   * flatpak fails
>* Needs a newer ECM than what flatpak SDK has
>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (28 April 2024)

2024-04-28 Thread Ben Cooksley
On Sun, Apr 28, 2024 at 9:23 PM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Bad news: 1 repo started failing
>
> kconfig:
>  * https://invent.kde.org/frameworks/kconfig/-/pipelines/675292
>   * kconfigcore-kdesktopfiletest fails on Linux
>   * windows fails to compile
>* There have not been changes lately? Qt/ECM regressions?
>

These are not Qt regressions, as Qt is provided by the image and the
Windows image hasn't been rebuilt in weeks. CMake is the same.
They must be regressions within KConfig or ECM.


>
> Cheers,
>   Albert
>

Cheers,
Ben


CI moved to Qt 6.7 for Linux builds

2024-04-20 Thread Ben Cooksley
Hi all,

I have just flipped the switch that has moved the CI system over to using
Qt 6.7 for Linux builds on our SUSE images.

Should you see any issues with builds failing as a result of packages being
missing in the registry, please submit a merge request to
sysadmin/ci-management to ensure the missing build dependency is added to
our seed jobs.

I'll leave the Qt 6.6 package registry and container images in place for
another week or so then will schedule them for removal.

As part of this I have also updated the list of projects with Qt 6 only
master branches. Any residual Qt 5 build artifacts the CI system was
holding for those projects have now been purged, which may impact
downstream projects that depend on those projects that are still on Qt 5.

On an adjacent note, I'd also like to schedule the removal of CI support for
release/23.08 and Plasma/5.27 builds (by purging all of their binaries that
we currently hold) for the Qt 5.15 series.

While checking, however, I note that several projects still have activity on
those branches. Can we please confirm whether any further releases are
expected, as I'd prefer to remove anything that isn't being properly
maintained anymore.

Thanks,
Ben


Re: kdewebkit status

2024-04-20 Thread Ben Cooksley
On Sun, Apr 21, 2024 at 5:07 AM Ashark  wrote:

> Hello.
>

Hey Andrew,


>
> I have seen a bug that kdewebkit is failing to build (406342). I was
> looking into why the person was building that module in the first place.
>
> Afaict, this module is not used in any project. Searching repo-metadata
> for the word "kdewebkit", I see it is:
>
> Listed in `dependency-data-kf5-qt5` and in
> `dependency-data-stable-kf5-qt5`,
> without any dependency, and no other project points to it as a
> dependency.
>
> Listed in `kf5-frameworks.ksb` to be ignored:
> ```
> module-set frameworks
> repository kde-projects
> use-modules frameworks
>
> #tag v5.75.0-rc1
> branch kf5
> ignore-modules kdewebkit kuserfeedback
> end module-set
> ```
>
> and also listed in `kf6-frameworks.ksb` as ignored:
> ```
> module-set frameworks
> repository kde-projects
> use-modules frameworks
> ignore-modules kdelibs4support kdewebkit khtml kjsembed kmediaplayer
> kinit
> kjs kross kdesignerplugin kemoticons kxmlrpcclient
> cmake-options -DBUILD_WITH_QT6=ON
> end module-set
> ```
>
> The projects-invent/frameworks/kdewebkit/metadata.yaml contains the entry
> ```
> repoactive: true
> ```
>
> Maybe this should be marked as `repoactive: false` and removed from ignore-
> modules?
>

The repoactive flag mirrors the status of the repository on Gitlab (whether
it is active or archived).

As Frameworks has a compatibility promise, this repository will unfortunately
need to limp along until KF5 ceases making releases.


>
> The project readme says that it is removed from kf6.
>
>
>
Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (7 April 2024)

2024-04-10 Thread Ben Cooksley
On Wed, Apr 10, 2024 at 4:33 AM Volker Krause  wrote:

> On Sonntag, 7. April 2024 23:02:06 CEST Albert Astals Cid wrote:
> > Please work on fixing them, otherwise i will remove the failing CI jobs
> on
> > their 4th failing week, it is very important that CI is passing for
> multiple
> > reasons.
> >
> > Bad news: 3 repositories have started failing
> >
> > kconfigwidgets - NEW
> >  * https://invent.kde.org/frameworks/kconfigwidgets/-/pipelines/655246
> >   * klanguagenametest fails in Linux
> >*
> https://invent.kde.org/frameworks/kconfigwidgets/-/merge_requests/234
> >
> >
> > kcontacts - NEW
> >  * https://invent.kde.org/frameworks/kcontacts/-/pipelines/655247
> >   * AddressTest::formatTest fails in FreeBSD
>
> That's the same issue that also hit kitinerary. As I haven't gotten any
> answers to my questions about what changed on the CI there, I've now
> disabled this test on FreeBSD for kitinerary; I can do the same for
> KContacts, I guess.
>

To give a public answer on this: there was a general image rebuild to pick
up a number of updates to various libraries.
It seems something in the FreeBSD stack has come loose as part of this, so
we'll need to do some more investigation.
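
For anyone replicating the workaround Volker describes, disabling a test on
a single platform is usually a QSKIP guard in the test itself. A minimal
sketch, assuming the usual QtTest layout (the guard shown is illustrative,
not the actual kitinerary change):

// In the test body; Q_OS_FREEBSD comes from QtGlobal, QSKIP from QtTest.
void AddressTest::formatTest()
{
#ifdef Q_OS_FREEBSD
    QSKIP("Formatting differs on the FreeBSD CI images, see the CI thread");
#endif
    // ... original assertions run everywhere else ...
}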


>
> > kuserfeedback - NEW
> >  * https://invent.kde.org/frameworks/kuserfeedback/-/pipelines/655248
> >   * The code requires unreleased versions so flatpak fails
>
> Hm, that's a systematic problem: We cannot do Flatpak builds in a KF
> master
> branch on top of an existing runtime.
>
> Doing Flatpak builds only in the stable branch won't work here given there
> is
> no such branch. So the options I can think of are either building all KF
> dependencies explicitly here rather than using those from the runtime, or
> splitting the management/analytics tools (which is what the Flatpak is
> actually for) from the library.
>

I'd probably suggest splitting them at this stage given the issues we keep
hitting here...


>
> Regards,
> Volker


Cheers,
Ben


Re: KDE Frameworks with failing CI (kf5) (7 April 2024)

2024-04-08 Thread Ben Cooksley
On Mon, Apr 8, 2024 at 9:03 AM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Bad news: 1 repository is still failing and 1 has started failing
>
> kirigami - 3rd week
>  * https://invent.kde.org/frameworks/kirigami/-/pipelines/649285
>   * Android build fails
>* Something qt related needs a rebuild?
>

This will likely be resolved as part of the upcoming Craft cache rebuild
that is currently being worked on as part of rolling out Qt 6.7.
I'd estimate 1-2 weeks before we start to roll that out, at which point
this should be fixed.


>
>
> kcontacts - NEW
>  * https://invent.kde.org/frameworks/kcontacts/-/pipelines/655262
>   * AddressTest fails on FreeBSD (Same as in master)
>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (kf5) (24 March 2024)

2024-03-26 Thread Ben Cooksley
On Wed, Mar 27, 2024 at 5:55 AM Volker Krause  wrote:

> On Dienstag, 26. März 2024 00:42:53 CET Albert Astals Cid wrote:
> > El dilluns, 25 de març de 2024, a les 18:03:27 (CET), Volker Krause va
> >
> > escriure:
> > > On Sonntag, 24. März 2024 23:14:12 CET Albert Astals Cid wrote:
> > > > Please work on fixing them, otherwise i will remove the failing CI
> jobs
> > > > on
> > > > their 4th failing week, it is very important that CI is passing for
> > > > multiple reasons.
> > > >
> > > > Bad news: 2 repositories have started failing
> > > >
> > > > kirigami - NEW
> > > >
> > > >  * https://invent.kde.org/frameworks/kirigami/-/jobs/1679118
> > > >
> > > >   * Android build fails
> > > >
> > > >* Something qt related needs a rebuild?
> > >
> > > Yep, looks like a version mix due to the patch collection rebase.
> >
> > But why has this happened? I mean, how is it that some Qt has a different
> > version than some other Qt? Was there a rebuild of Android Qt while I was
> > doing the rebase?
> >
> > If I understand that right, QtSvg is at 5.15.13 but QtWidgets is only at
> > 5.15.12?
>
> Not sure what caused it specifically here, but this happens as soon as
> anything
> triggers a rebuild of a part of Qt for whatever reason. That part is then
> taken from the kde/5.15 head, which is newer than the rest of Qt in the
> cache.
>
> The effect used to spread/worsen over time as more things in the cache
> become
> outdated (not sure if that got better or worse with the significantly
> reduced
> Qt5-related activity nowadays).
>
> > > I'm wondering how we want to proceed here longer term, as this will
> > > continue to need active maintenance while most of our Android apps have
> > > meanwhile moved to Qt 6.
> > >
> > > Pin the Qt version in Craft to a fixed revision? Drop Android KF5 CI
> > > builds? Find volunteers to do the work of keeping this running/working?
> >
> > If we're saying Kirigami works on Android, ideally we should keep a CI.
>
> Pinning the Qt5 version might be a good compromise then? Keeping Kirigami
> working in a fixed environment should be fine, but dealing with the
> movement in
> Qt, Android and Craft for one major version is hard enough already.
>

This issue has existed for a while now as you've pointed out.

The only real fix is for us to fully rebuild the Craft cache each time the
KDE Qt 5 Patch Collection is rebased.
Without version-specific stable branches of the patch collection for us to
point Craft to, that is unfortunately the best we'll be able to do, as we
currently have Craft pointed at a moving target.

Alternatively, we can move Qt 5 support in Craft to an LTS branch, which
should minimise the amount of movement in the blueprints (the trigger for
rebuilds of parts of Qt in the cache), keeping the cache valid even if
outdated.
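
For reference, pinning would amount to a blueprint-settings override on the
CI side. A hedged sketch, assuming Craft's [BlueprintSettings] mechanism
(the key and version shown are illustrative, not a tested configuration):

[BlueprintSettings]
# Pin the Qt 5 blueprints to a fixed revision instead of the kde/5.15 head
libs/qt5.version = 5.15.12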


>
> Regards,
> Volker
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (10 March 2024)

2024-03-12 Thread Ben Cooksley
On Mon, Mar 11, 2024 at 12:46 PM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Bad news: 1 repository is still failing and 1 new one has started failing
>
>
> kimageformats - 3rd week
>  * https://invent.kde.org/frameworks/kimageformats/-/pipelines/627271
>   * kimageformats-read-xcf fails in Linux CI
>* https://invent.kde.org/frameworks/kimageformats/-/merge_requests/211
> fixes it but then breaks the BSD builder (because it is on an older Qt).
> Can we update Qt in the BSD builder to 6.6.2?
>

Please file a ticket for that update.


>
>
> kpackage - NEW
>  * https://invent.kde.org/frameworks/kpackage/-/pipelines/627276
>   * appstream check fails
>

It would appear that this will require changes to the KPackage format to
ensure that we allow (require?) plugins to specify a homepage to comply
with Appstream requirements.
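
For context, AppStream expects a homepage URL in the generated metainfo; a
minimal sketch of the kind of entry involved (the component id is a made-up
example, and how KPackage would map plugin metadata onto it is an open
question):

<component type="addon">
  <id>org.kde.example.packagestructure</id>
  <url type="homepage">https://example.org/myplugin</url>
</component>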


>
> Cheers,
>   Albert
>
>
>
Regards,
Ben


Re: KDE Frameworks with failing CI (master) (25 February 2024)

2024-02-26 Thread Ben Cooksley
On Mon, Feb 26, 2024 at 11:18 AM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Good news: 1 repository has been fixed
>
> Bad news: 3 NEW repositories are failing
>
>
> extra-cmake-modules - NEW
>  *
> https://invent.kde.org/frameworks/extra-cmake-modules/-/pipelines/615155
>   * "This job is stuck because the project doesn't have any runners online
> assigned to it." on the "docs" job
>

This job is no longer needed following improvements to the process that
generates api.kde.org, so I've removed the job from both KF6 (master) and
KF5.


>
>
> kimageformats - NEW
>  * https://invent.kde.org/frameworks/kimageformats/-/pipelines/615158
>   * kimageformats-read-xcf fails in Linux CI
>
>
> kuserfeedback - NEW
>  * https://invent.kde.org/frameworks/kuserfeedback/-/pipelines/615161
>   * flatpak fails for versioning (why does this even have a flatpak?
> what's the use case for a kuserfeedback flatpak?)
>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (kf5) (11 February 2024)

2024-02-10 Thread Ben Cooksley
On Sun, Feb 11, 2024 at 1:13 PM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple reasons.
>
> Good news: 1 repository was fixed
>
> Bad news: 2 repositories are still failing
>
>
> baloo - 3rd week
>  * https://invent.kde.org/frameworks/baloo/-/pipelines/604706
>   * Tests fail on FreeBSD
>
>
> kfilemetadata - 3rd week
>  * https://invent.kde.org/frameworks/kfilemetadata/-/pipelines/604707
>   * Tests fail on FreeBSD
>* Should we backport the fix made in KF6? Christoph?
>

Yes, it should be backported, otherwise the metadata features are broken on
FreeBSD 14+ for users.
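
For reference, the backport is normally just a cherry-pick of the master fix
onto the kf5 branch; a minimal sketch (the commit reference is a placeholder,
not the actual fix):

git checkout kf5
git cherry-pick <commit-of-the-kf6-fix>
git push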


>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: Flatpak jobs on KDE CI vs. continuous integration on main/master/devel branches

2024-02-05 Thread Ben Cooksley
On Mon, Feb 5, 2024 at 4:28 AM Friedrich W. H. Kossebau 
wrote:

> Hi,
>
> ((cc:kde-frameworks-devel for heads-up, replies please only to
> kde-core-devel))
>
> I hit the problem that, when working on a repo which wants to use the
> latest KF development state to integrate some new KF API just added in
> cooperation with that very repo, I cannot do so once someone has added a
> flatpak job on CI to that repo.
>
> Because with such flatpak jobs it seems the available KF version is limited
> not to the current latest one, as expected for continuous integration, but
> to some older (documented anywhere?) snapshot:
>
> "runtime-version": "6.6-kf6preview",
>
> What can be done here to reestablish the old immediate continuous
> integration
> workflow? Where new APIs (also from KF) are instantly available?
>
> Right now this is a new extra burden which makes working on new features
> with KF and apps more complicated, and thus less interesting; one/I would
> rather duplicate code in apps to get things done.
>
> Blocking the latest KF API from usage also means it gets less testing
> before the initial release.
>
> That is besides all the resource costs of creating flatpaks on master
> builds by default every time, when those are usually not used by anyone
> anyway.
>
> So, how to solve those problems? Did I miss something?
> Could flatpak builds on master branches be made on-demand rather?
>

For the record, my rebuild of the 6.6-kf6preview Flatpak Runtime/SDK was
successful, and the failure that kicked this off in KUserFeedback has now
been fixed.
https://invent.kde.org/frameworks/kuserfeedback/-/jobs/1561435


> Cheers
> Friedrich
>
>
>
Cheers,
Ben


Re: Flatpak jobs on KDE CI vs. continuous integration on main/master/devel branches

2024-02-04 Thread Ben Cooksley
On Mon, Feb 5, 2024 at 4:28 AM Friedrich W. H. Kossebau 
wrote:

> Hi,
>
> ((cc:kde-frameworks-devel for heads-up, replies please only to
> kde-core-devel))
>
> I hit the problem that, when working on a repo which wants to use the
> latest KF development state to integrate some new KF API just added in
> cooperation with that very repo, I cannot do so once someone has added a
> flatpak job on CI to that repo.
>
> Because with such flatpak jobs it seems the available KF version is limited
> not to the current latest one, as expected for continuous integration, but
> to some older (documented anywhere?) snapshot:
>
> "runtime-version": "6.6-kf6preview",
>

Please see https://invent.kde.org/packaging/flatpak-kde-runtime/-/tree/kf6
for what is in the KF6 preview.


>
> What can be done here to reestablish the old immediate continuous
> integration
> workflow? Where new APIs (also from KF) are instantly available?
>

With Flatpak new APIs were never instantly available - there has always
been a delay as the Flatpak Runtime uses the most recent released version
of our software.


>
> Right now this is a new extra burden which makes working on new features
> with KF and apps more complicated, and thus less interesting; one/I would
> rather duplicate code in apps to get things done.
>
> Blocking the latest KF API from usage also means it gets less testing
> before the initial release.


> That is besides all the resource costs of creating flatpaks on master
> builds by default every time, when those are usually not used by anyone
> anyway.
>

Those applications that have a hard dependency on features being added to
Frameworks are not good candidates for making use of our Continuous
Delivery systems, I'm afraid.
Both Flatpak and Craft based (Linux Appimages, Android APKs, Windows and
macOS) CD jobs are best optimised for those applications that rely on the
stable Frameworks releases.

There are ways (in .craft.ini) to make newer Frameworks available, but that
requires the system to recompile that Framework each time you trigger a
build, so it is not recommended.
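
A hedged sketch of what such an override looks like (the key style is an
assumption based on Craft's blueprint settings; check the Craft
documentation before relying on it):

[BlueprintSettings]
# Build this Framework from its master branch instead of the cached release
kde/frameworks/kcoreaddons.version = master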

Allowing those systems to use the "latest" artifacts of Frameworks would be
a non-trivial exercise.


> So, how to solve those problems? Did I miss something?
> Could flatpak builds on master branches be made on-demand rather?


> Cheers
> Friedrich
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (4 February 2024)

2024-02-04 Thread Ben Cooksley
On Mon, Feb 5, 2024 at 1:26 AM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI
> jobs on their 4th failing week, it is very important that CI is passing
> for
> multiple reasons.
>
> Good news: 3 repositories have been fixed
>
> Bad news: 2 repositories are still failing, 3 new ones have started failing
>
>
> baloo - 2nd week
>  * https://invent.kde.org/frameworks/baloo/-/pipelines/598254
>   * FreeBSD tests are failing
>
>
> kfilemetadata - 2nd week
>  * https://invent.kde.org/frameworks/kfilemetadata/-/pipelines/598257
>   * FreeBSD tests are failing
>

This one has been debugged by Christoph and there is a pending MR to fix
this.
See https://invent.kde.org/frameworks/kfilemetadata/-/merge_requests/126

It was caused by differences between the FreeBSD and Linux APIs, with newer
versions of FreeBSD / ZFS being stricter about the input they're given for
attribute names.
(So yes, there was an actual bug here.)
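
A hedged sketch of the API difference in play, for context (the attribute
name is illustrative; this is not the actual KFileMetaData patch):

// Linux encodes the namespace as a "user." prefix on the attribute name;
// FreeBSD passes the namespace as a separate argument, and newer
// FreeBSD / ZFS rejects malformed names outright.
#if defined(__FreeBSD__)
#include <sys/types.h>
#include <sys/extattr.h>
static bool writeAttr(const char *path, const void *value, size_t size)
{
    return extattr_set_file(path, EXTATTR_NAMESPACE_USER,
                            "baloo.rating", value, size) >= 0;
}
#else
#include <sys/xattr.h>
static bool writeAttr(const char *path, const void *value, size_t size)
{
    return setxattr(path, "user.baloo.rating", value, size, 0) == 0;
}
#endif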


>
> kdav - NEW
>  * https://invent.kde.org/frameworks/kdav/-/pipelines/598256
>   * davcollectionsmultifetchjobtest fails both in Linux and FreeBSD
>
>
> ki18n - NEW
>  * https://invent.kde.org/frameworks/ki18n/-/pipelines/598258
>   * Linux tests are failing
>

The Linux image was recently rebuilt, so this was possibly caused by newer
dependencies coming through.


>
>
> kuserfeedback - NEW
>  * https://invent.kde.org/frameworks/kuserfeedback/-/pipelines/598260
>   * flatpak is failing (the SDK needs updating?)
>

https://invent.kde.org/packaging/flatpak-kde-runtime/-/commit/ddfdb201e65cfebdd33323a6752ff5e5fc475001
https://invent.kde.org/packaging/flatpak-kde-runtime/-/jobs/1560506

Once that has passed, a rebuild of the flatpak-builder CI image will be
needed, then that will be fixed.


>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (29 January 2024)

2024-02-03 Thread Ben Cooksley
On Sun, Feb 4, 2024 at 5:17 AM  wrote:

> On 2024-02-03 08:57, Ben Cooksley wrote:
> > On Wed, Jan 31, 2024 at 9:25 PM Ben Cooksley 
> > wrote:
> >
> >> On Wed, Jan 31, 2024 at 9:06 AM Volker Krause 
> >> wrote:
> >>
> >>> On Dienstag, 30. Januar 2024 19:08:50 CET Ben Cooksley wrote:
> >>>> On Wed, Jan 31, 2024 at 5:10 AM Volker Krause
> >>>  wrote:
> >>>>> On Dienstag, 30. Januar 2024 09:57:32 CET Ben Cooksley wrote:
> >>>>>> On Tue, Jan 30, 2024 at 8:47 PM Sune Vuorela
> >>>  wrote:
> >>>>>> > On 2024-01-29, Albert Astals Cid  wrote:
> >>>>>> > > Bad news: 6 repositories have started failing
> >>>>>> > >
> >>>>>> > > baloo:
> >>>>>> > > kconfig:
> >>>>>> > > kcontacts
> >>>>>> > > kfilemetadata:
> >>>>>> > > ki18n:
> >>>>>> > >
> >>>>>> > > threadweaver:
> >>>>>> > >   * FreeBSD tests are failing
> >>>>>> >
> >>>>>> > I haven't studied these, and don't know if they are
> >>> frequent or
> >>>>>> > occasional failures. I have seen, after the fbsd builder
> >>> changes, that
> >>>>>> > test execution times have gone up 20-50%. If it is big
> >>> tests that is
> >>>>>> > already close to the limit, then that might be the reason.
> >>>>>> >
> >>>>>> > Or for others with occasional timeout tests on freebsd.
> >>>>>>
> >>>>>> Having a quick look at this, it seems that quite a few of
> >>> those failures
> >>>>>> are i18n related.
> >>>>>> Given we are seeing locale warnings I have a suspicion that
> >>> is the cause
> >>>>>
> >>>>> of
> >>>>>
> >>>>>> many of those failures.
> >>>>>
> >>>>> For ki18n and kcontacts another possible cause could be the
> >>> iso-codes
> >>>>> translation catalogs missing. Are those by any chance packaged
> >>> separately
> >>>>> as
> >>>>> on some Linux distributions?
> >>>>
> >>>> I've checked and we do have iso-codes installed within the
> >>> FreeBSD
> >>>> containers.
> >>>> The files are located at /usr/local/share/iso-codes/ though -
> >>> will our
> >>>> logic find them there?
> >>>
> >>> Yes, the iso-codes data file are found, the tests would show very
> >>> explicit
> >>> error messages and fail in many more places otherwise. We however
> >>> also need
> >>> the corresponding translation catalogs, not just the data files.
> >>> On Linux those
> >>> are in /usr/share/locale/*/LC_MESSAGE/iso_3166*.mo (but often
> >>> separately
> >>> packaged and thus missing).
> >>
> >> Those files are present, although in FreeBSD fashion they are at
> >> /usr/local/share/ instead of /usr/share/:
> >>
> >> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-1.mo
> >> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-3.mo
> >> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-2.mo
> >> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166.mo
> >> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166_2.mo
> >>
> >> Confusingly, and in a way that probably doesn't help software:
> >>
> >> [user@399f8cd87e55 ~]$ ls -lah /usr/share/locale/tr_TR.UTF-8/
> >> total 52
> >> drwxr-xr-x2 root wheel8B Jan 30 10:28 .
> >> drwxr-xr-x  197 root wheel  197B Jan 30 10:28 ..
> >> -r--r--r--1 root wheel   79K Jan 25 15:04 LC_COLLATE
> >> lrwxr-xr-x1 root wheel   19B Jan 25 15:04 LC_CTYPE ->
> >> ../C.UTF-8/LC_CTYPE
> >> -r--r--r--1 root wheel  167B Jan 25 15:04 LC_MESSAGES
> >> -r--r--r--1 root wheel   34B Jan 25 15:04 LC_MONETARY
> >> -r--r--r--1 root wheel6B Jan 25 15:04 LC_NUMERIC
> >> -r--r--r--1 root wheel  374B Jan 25 15:04 LC_TIME
> >
> > The issue was tracked down thanks to the work of frinring - who figured
> > out that LC_ALL and LANGUAGE had been set in our FreeBSD containers.
> > That has now been rectified, and the tests in several more Frameworks
> > now pass.
> >
> > (Leaving just Baloo and KFileMetaData as broken I believe)
>
> Hi,
>
> could it be that extended attributes don't work?
>
> I think these tests rely on them.
>

Good suspect; however, I did some testing and it seems to work fine:

[user@8a025cda8c7b ~]$ touch file
[user@8a025cda8c7b ~]$ lsextattr user file
file
[user@8a025cda8c7b ~]$ setextattr user test value1 file
[user@8a025cda8c7b ~]$ lsextattr user file
file    test
[user@8a025cda8c7b ~]$ getextattr user test file
file    value1

The file system in use here is ZFS if it helps anyone.


> Greetings
> Christoph
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (29 January 2024)

2024-02-02 Thread Ben Cooksley
On Wed, Jan 31, 2024 at 9:25 PM Ben Cooksley  wrote:

> On Wed, Jan 31, 2024 at 9:06 AM Volker Krause  wrote:
>
>> On Dienstag, 30. Januar 2024 19:08:50 CET Ben Cooksley wrote:
>> > On Wed, Jan 31, 2024 at 5:10 AM Volker Krause  wrote:
>> > > On Dienstag, 30. Januar 2024 09:57:32 CET Ben Cooksley wrote:
>> > > > On Tue, Jan 30, 2024 at 8:47 PM Sune Vuorela 
>> wrote:
>> > > > > On 2024-01-29, Albert Astals Cid  wrote:
>> > > > > > Bad news: 6 repositories have started failing
>> > > > > >
>> > > > > > baloo:
>> > > > > > kconfig:
>> > > > > > kcontacts
>> > > > > > kfilemetadata:
>> > > > > > ki18n:
>> > > > > >
>> > > > > > threadweaver:
>> > > > > >   * FreeBSD tests are failing
>> > > > >
>> > > > > I haven't studied these, and don't know if they are frequent or
>> > > > > occasional failures. I have seen, after the fbsd builder changes,
>> that
>> > > > > test execution times have gone up 20-50%. If it is big tests that
>> is
>> > > > > already close to the limit, then that might be the reason.
>> > > > >
>> > > > > Or for others with occasional timeout tests on freebsd.
>> > > >
>> > > > Having a quick look at this, it seems that quite a few of those
>> failures
>> > > > are i18n related.
>> > > > Given we are seeing locale warnings I have a suspicion that is the
>> cause
>> > >
>> > > of
>> > >
>> > > > many of those failures.
>> > >
>> > > For ki18n and kcontacts another possible cause could be the iso-codes
>> > > translation catalogs missing. Are those by any chance packaged
>> separately
>> > > as
>> > > on some Linux distributions?
>> >
>> > I've checked and we do have iso-codes installed within the FreeBSD
>> > containers.
>> > The files are located at /usr/local/share/iso-codes/ though - will our
>> > logic find them there?
>>
>> Yes, the iso-codes data file are found, the tests would show very
>> explicit
>> error messages and fail in many more places otherwise. We however also
>> need
>> the corresponding translation catalogs, not just the data files. On Linux
>> those
>> are in /usr/share/locale/*/LC_MESSAGE/iso_3166*.mo (but often separately
>> packaged and thus missing).
>>
>
> Those files are present, although in FreeBSD fashion they are at
> /usr/local/share/ instead of /usr/share/:
>
> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-1.mo
> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-3.mo
> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166-2.mo
> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166.mo
> /usr/local/share/locale/tr/LC_MESSAGES/iso_3166_2.mo
>
> Confusingly, and in a way that probably doesn't help software:
>
> [user@399f8cd87e55 ~]$ ls -lah /usr/share/locale/tr_TR.UTF-8/
> total 52
> drwxr-xr-x2 root wheel8B Jan 30 10:28 .
> drwxr-xr-x  197 root wheel  197B Jan 30 10:28 ..
> -r--r--r--1 root wheel   79K Jan 25 15:04 LC_COLLATE
> lrwxr-xr-x1 root wheel   19B Jan 25 15:04 LC_CTYPE ->
> ../C.UTF-8/LC_CTYPE
> -r--r--r--1 root wheel  167B Jan 25 15:04 LC_MESSAGES
> -r--r--r--1 root wheel   34B Jan 25 15:04 LC_MONETARY
> -r--r--r--1 root wheel6B Jan 25 15:04 LC_NUMERIC
> -r--r--r--1 root wheel  374B Jan 25 15:04 LC_TIME
>

The issue was tracked down thanks to the work of frinring, who figured out
that LC_ALL and LANGUAGE had been set in our FreeBSD containers.
That has now been rectified, and the tests in several more Frameworks now
pass.

(Leaving just Baloo and KFileMetaData as broken I believe)
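
For anyone checking a container locally, a quick sketch of what the fix
amounts to (plain shell, nothing KDE-specific assumed):

# Should print nothing once the container is fixed; LC_ALL / LANGUAGE
# being set here forced the tests into an unexpected locale.
env | grep -E '^(LC_ALL|LANGUAGE)='
unset LC_ALL LANGUAGE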


>
>
>>
>> Regards,
>> Volker
>
>
> Cheers,
> Ben
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (29 January 2024)

2024-01-31 Thread Ben Cooksley
On Wed, Jan 31, 2024 at 9:06 AM Volker Krause  wrote:

> On Dienstag, 30. Januar 2024 19:08:50 CET Ben Cooksley wrote:
> > On Wed, Jan 31, 2024 at 5:10 AM Volker Krause  wrote:
> > > On Dienstag, 30. Januar 2024 09:57:32 CET Ben Cooksley wrote:
> > > > On Tue, Jan 30, 2024 at 8:47 PM Sune Vuorela 
> wrote:
> > > > > On 2024-01-29, Albert Astals Cid  wrote:
> > > > > > Bad news: 6 repositories have started failing
> > > > > >
> > > > > > baloo:
> > > > > > kconfig:
> > > > > > kcontacts
> > > > > > kfilemetadata:
> > > > > > ki18n:
> > > > > >
> > > > > > threadweaver:
> > > > > >   * FreeBSD tests are failing
> > > > >
> > > > > I haven't studied these, and don't know if they are frequent or
> > > > > occasional failures. I have seen, after the fbsd builder changes,
> that
> > > > > test execution times have gone up 20-50%. If it is big tests that
> is
> > > > > already close to the limit, then that might be the reason.
> > > > >
> > > > > Or for others with occasional timeout tests on freebsd.
> > > >
> > > > Having a quick look at this, it seems that quite a few of those
> failures
> > > > are i18n related.
> > > > Given we are seeing locale warnings I have a suspicion that is the
> cause
> > >
> > > of
> > >
> > > > many of those failures.
> > >
> > > For ki18n and kcontacts another possible cause could be the iso-codes
> > > translation catalogs missing. Are those by any chance packaged
> separately
> > > as
> > > on some Linux distributions?
> >
> > I've checked and we do have iso-codes installed within the FreeBSD
> > containers.
> > The files are located at /usr/local/share/iso-codes/ though - will our
> > logic find them there?
>
> Yes, the iso-codes data file are found, the tests would show very explicit
> error messages and fail in many more places otherwise. We however also
> need
> the corresponding translation catalogs, not just the data files. On Linux
> those
> are in /usr/share/locale/*/LC_MESSAGE/iso_3166*.mo (but often separately
> packaged and thus missing).
>

Those files are present, although in FreeBSD fashion they are at
/usr/local/share/ instead of /usr/share/:

/usr/local/share/locale/tr/LC_MESSAGES/iso_3166-1.mo
/usr/local/share/locale/tr/LC_MESSAGES/iso_3166-3.mo
/usr/local/share/locale/tr/LC_MESSAGES/iso_3166-2.mo
/usr/local/share/locale/tr/LC_MESSAGES/iso_3166.mo
/usr/local/share/locale/tr/LC_MESSAGES/iso_3166_2.mo

Confusingly, and in a way that probably doesn't help software:

[user@399f8cd87e55 ~]$ ls -lah /usr/share/locale/tr_TR.UTF-8/
total 52
drwxr-xr-x2 root wheel8B Jan 30 10:28 .
drwxr-xr-x  197 root wheel  197B Jan 30 10:28 ..
-r--r--r--1 root wheel   79K Jan 25 15:04 LC_COLLATE
lrwxr-xr-x1 root wheel   19B Jan 25 15:04 LC_CTYPE ->
../C.UTF-8/LC_CTYPE
-r--r--r--1 root wheel  167B Jan 25 15:04 LC_MESSAGES
-r--r--r--1 root wheel   34B Jan 25 15:04 LC_MONETARY
-r--r--r--1 root wheel6B Jan 25 15:04 LC_NUMERIC
-r--r--r--1 root wheel  374B Jan 25 15:04 LC_TIME


>
> Regards,
> Volker


Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (29 January 2024)

2024-01-30 Thread Ben Cooksley
On Wed, Jan 31, 2024 at 5:10 AM Volker Krause  wrote:

> On Dienstag, 30. Januar 2024 09:57:32 CET Ben Cooksley wrote:
> > On Tue, Jan 30, 2024 at 8:47 PM Sune Vuorela  wrote:
> > > On 2024-01-29, Albert Astals Cid  wrote:
> > > > Bad news: 6 repositories have started failing
> > > >
> > > > baloo:
> > > > kconfig:
> > > > kcontacts
> > > > kfilemetadata:
> > > > ki18n:
> > > >
> > > > threadweaver:
> > > >   * FreeBSD tests are failing
> > >
> > > I haven't studied these, and don't know if they are frequent or
> > > occasional failures. I have seen, after the fbsd builder changes, that
> > > test execution times have gone up 20-50%. If it is big tests that is
> > > already close to the limit, then that might be the reason.
> > >
> > > Or for others with occasional timeout tests on freebsd.
> >
> > Having a quick look at this, it seems that quite a few of those failures
> > are i18n related.
> > Given we are seeing locale warnings I have a suspicion that is the cause
> of
> > many of those failures.
>
> For ki18n and kcontacts another possible cause could be the iso-codes
> translation catalogs missing. Are those by any chance packaged separately
> as
> on some Linux distributions?
>

I've checked and we do have iso-codes installed within the FreeBSD
containers.
The files are located at /usr/local/share/iso-codes/ though - will our
logic find them there?

Following the installation of locales, I'm happy to report that kconfig and
threadweaver are fixed, so that is one part of the puzzle at least.


>
> Regards,
> Volker
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (kf5) (29 January 2024)

2024-01-30 Thread Ben Cooksley
On Tue, Jan 30, 2024 at 11:59 AM Albert Astals Cid  wrote:

> Please work on fixing them, otherwise i will remove the failing CI jobs on
> their 4th failing week, it is very important that CI is passing for
> multiple
> reasons.
>
> Bad news: 11 repositories started failing
>
>
> baloo:
>  * https://invent.kde.org/frameworks/baloo/-/pipelines/593597
>   * Tests fail on FreeBSD
>

Looks like the same failure we see on master, where the file watcher
appears to be asked to watch an invalid path.
Do we know if the correct path is being passed in here?


>
>
> kfilemetadata:
>  * https://invent.kde.org/frameworks/kfilemetadata/-/pipelines/593602
>   * Tests fail on FreeBSD


>
> kwidgetaddons:
>  * https://invent.kde.org/frameworks/kwidgetsaddons/-/pipelines/593612
>   * Windows static fails to compile
>
>
> kemoticons:
>  * https://invent.kde.org/frameworks/kemoticons/-/pipelines/593601
>   * Fails because of ecm_feature_summary
>
>
> kdelibs4support:
>  * https://invent.kde.org/frameworks/kdelibs4support/-/pipelines/593599
>   * Fails because of ecm_feature_summary
>
>
> khtml:
>  * https://invent.kde.org/frameworks/khtml/-/pipelines/593603
>   * Fails because of ecm_feature_summary
>
>
> kjs:
>  * https://invent.kde.org/frameworks/kjs/-/pipelines/593606
>   * Fails because of ecm_feature_summary
>
>
> kjsembed:
>  * https://invent.kde.org/frameworks/kjsembed/-/pipelines/593607
>   * Fails because of ecm_feature_summary
>
>
> kmediaplayer:
>  * https://invent.kde.org/frameworks/kmediaplayer/-/pipelines/593608
>   * Fails because of ecm_feature_summary
>
>
> kross:
>  * https://invent.kde.org/frameworks/kross/-/pipelines/593610
>   * Fails because of ecm_feature_summary
>
>
> kxmlrpcclient:
>  * https://invent.kde.org/frameworks/kxmlrpcclient/-/pipelines/593613
>   * Fails because of ecm_feature_summary
>
>
> Cheers,
>   Albert
>

Cheers,
Ben


Re: KDE Frameworks with failing CI (master) (29 January 2024)

2024-01-30 Thread Ben Cooksley
On Tue, Jan 30, 2024 at 8:47 PM Sune Vuorela  wrote:

> On 2024-01-29, Albert Astals Cid  wrote:
> > Bad news: 6 repositories have started failing
> >
> > baloo:
> > kconfig:
> > kcontacts
> > kfilemetadata:
> > ki18n:
> > threadweaver:
> >   * FreeBSD tests are failing
>
> I haven't studied these, and don't know if they are frequent or
> occasional failures. I have seen, after the fbsd builder changes, that
> test execution times have gone up 20-50%. If it is big tests that are
> already close to the limit, then that might be the reason.
>
> Or for others with occasional timeout tests on freebsd.
>

Having a quick look at this, it seems that quite a few of those failures
are i18n related.
Given we are seeing locale warnings I have a suspicion that is the cause of
many of those failures.

Tobias, any ideas here?


>
> /Sune
>
>
Cheers,
Ben


Re: Major CI changes - FreeBSD and Linux

2024-01-22 Thread Ben Cooksley
On Mon, Jan 22, 2024 at 10:08 PM Ben Cooksley  wrote:

> Hi all,
>
> Over the past few weeks significant work has been undertaken to develop
> the ability to make use of containerised builds for FreeBSD.
>
> Over the weekend, I'm happy to report, this was rolled out and is now in
> use across all 5 CI workers that support invent.kde.org. This means that
> going forward we should no longer run out of disk space on our FreeBSD CI
> jobs, and we will have the ability to ensure others can easily reproduce
> our setup on their local systems.
>
> The FreeBSD images for Qt 5.15 and Qt 6.6 that are in use can be found at
> https://invent.kde.org/sysadmin/ci-images along with the other images we
> publish. For those curious about how to set up their own builder,
> instructions can be found in the gitlab-templates/ folder of
> sysadmin/ci-utilities (instructions are also present there for Linux and
> Windows).
>
> Alongside this, we've also transitioned from using Docker on the Linux
> side of the CI workers to using rootless (unprivileged) Podman containers.
> This change was necessitated by changes to Bubblewrap, which is the
> underlying container technology used by flatpak-builder, that made it
> incompatible with the workarounds we previously had in place to run it
> under Docker.
>
> For most projects this should not pose any issues; however, due to a
> last-minute issue discovered during the rollout, the DRM virtual GEM
> devices, while present, won't be accessible. To my knowledge this only
> impacts KWin.
> It is possible that other projects doing actions in their tests that need
> some form of privilege (such as invoking a debugger) may also be affected;
> however, in theory there should not be much difference between the two
> container implementations.
>
> This change has also come along with a switch to Debian Bookworm (and the
> 6.1.0 kernel that comes with it) which depending on your tests could also
> have an impact.
>
> The underlying operating system in use within our Linux CI images is not
> changed and continues to be OpenSUSE for desktop Linux, and Ubuntu 22.04
> for Android mobile builds.
>
> At this time the setup for building Linux images has yet to be adapted,
> so that capability is temporarily unavailable. It is expected to be
> restored in the coming days.
> Until that is completed, we will be unable to rebuild any of our Linux
> images.
>

This has now been corrected and should be functional again.


>
> Thanks,
> Ben
>

Cheers,
Ben


Major CI changes - FreeBSD and Linux

2024-01-22 Thread Ben Cooksley
Hi all,

Over the past few weeks significant work has been undertaken to develop the
ability to make use of containerised builds for FreeBSD.

Over the weekend, I'm happy to report, this was rolled out and is now in use
across all 5 CI workers that support invent.kde.org. This means that going
forward we should no longer run out of disk space on our FreeBSD CI jobs,
and we will have the ability to ensure others can easily reproduce our setup
on their local systems.

The FreeBSD images for Qt 5.15 and Qt 6.6 that are in use can be found at
https://invent.kde.org/sysadmin/ci-images along with the other images we
publish. For those curious about how to set up their own builder,
instructions can be found in the gitlab-templates/ folder of
sysadmin/ci-utilities (instructions are also present there for Linux and
Windows).

Alongside this, we've also transitioned from using Docker on the Linux side
of the CI workers to using rootless (unprivileged) Podman containers. This
change was necessitated by changes to Bubblewrap, which is the underlying
container technology used by flatpak-builder, that made it incompatible
with the workarounds we previously had in place to run it under Docker.

For most projects this should not pose any issues; however, due to a
last-minute issue discovered during the rollout, the DRM virtual GEM devices,
while present, won't be accessible. To my knowledge this only impacts KWin.
It is possible that other projects doing actions in their tests that need
some form of privilege (such as invoking a debugger) may also be affected;
however, in theory there should not be much difference between the two
container implementations.

This change has also come along with a switch to Debian Bookworm (and the
6.1.0 kernel that comes with it) which depending on your tests could also
have an impact.

The underlying operating system in use within our Linux CI images is not
changed and continues to be OpenSUSE for desktop Linux, and Ubuntu 22.04
for Android mobile builds.

At this time the setup for building Linux images has yet to be adapted,
so that capability is temporarily unavailable. It is expected to be
restored in the coming days.
Until that is completed, we will be unable to rebuild any of our Linux
images.

Thanks,
Ben


Re: Transitioning to Qt 6.6 for Windows builds - Syndication build failure

2023-11-21 Thread Ben Cooksley
On Tue, Nov 21, 2023 at 6:29 AM  wrote:

> On 2023-11-19 09:58, Ben Cooksley wrote:
> > Hi all,
> >
> > As you'll be aware, we've been working on moving CI over to Qt 6.6 for
> > a little while now, and as part of this have hit a bit of a roadblock
> > with the Syndication Framework.
> >
> > The roadblock is most likely due to the transition to using MSVC
> > 2022 (and all the compiler changes that come with that); however, that
> > change was mandated by Qt 6.6 itself, so there isn't much we can do
> > about it.
> >
> > The build failure can be seen at
> > https://invent.kde.org/sysadmin/ci-management/-/jobs/1373146 and a
> > draft fix for the issue can be found at
> > https://invent.kde.org/frameworks/syndication/-/merge_requests/26
> >
> > Any assistance with getting Syndication fixed up would be very much
> > appreciated so we can get Windows builds moved to Qt 6.6 (which will
> > leave just FreeBSD on Qt 6.5)
>
> Hi, that seems to be fixed now with the last merge. I rescheduled the job;
> it already passed at least the syndication step.
>

Thanks Christoph; following that, enough of the other seed jobs passed, so I
moved us over to Qt 6.6 for Windows CI.


>
> Greetings
> Christoph
>

Cheers,
Ben


>
> >
> > Thanks,
> > Ben
>


Gitlab update - CI future proofing required

2023-11-19 Thread Ben Cooksley
Hi all,

Over this weekend I completed a series of updates to invent.kde.org, moving
it to the latest supported version of Postgres (14) and Gitlab (16.6).

As part of that Gitlab update, additional security policies began to be
enforced by Gitlab, which means our existing method of including CI templates
is becoming problematic.

To correct this, we need to port our .gitlab-ci.yml files over to the
include:project syntax (see
https://docs.gitlab.com/ee/ci/yaml/#includeproject)

As an example, this is what it might look like for a Qt 6 only project with
Linux, FreeBSD and Windows builds:

include:
  - project: sysadmin/ci-utilities
file:
  - /gitlab-templates/linux-qt6.yml
  - /gitlab-templates/freebsd-qt6.yml
  - /gitlab-templates/windows-qt6.yml
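
For comparison, the legacy style being replaced pulled the templates in via
remote URLs, along these lines (as seen in older .gitlab-ci.yml files):

include:
  - https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/linux.yml
  - https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/freebsd.yml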

While we've been able to permit the existing syntax to work for now, it is
recommended that projects please look into porting their CI configurations
now to avoid future issues.

Thanks,
Ben


Transitioning to Qt 6.6 for Windows builds - Syndication build failure

2023-11-19 Thread Ben Cooksley
Hi all,

As you'll be aware, we've been working on moving CI over to Qt 6.6 for a
little while now, and as part of this have hit a bit of a roadblock with
the Syndication Framework.

The roadblock is most likely due to the transition to using MSVC 2022
(and all the compiler changes that come with that); however, that change was
mandated by Qt 6.6 itself, so there isn't much we can do about it.

The build failure can be seen at
https://invent.kde.org/sysadmin/ci-management/-/jobs/1373146 and a draft
fix for the issue can be found at
https://invent.kde.org/frameworks/syndication/-/merge_requests/26

Any assistance with getting Syndication fixed up would be very much
appreciated so we can get Windows builds moved to Qt 6.6 (which will leave
just FreeBSD on Qt 6.5)

Thanks,
Ben


Re: plasma-framework

2023-11-07 Thread Ben Cooksley
On Wed, Nov 8, 2023 at 12:22 AM Jonathan Esk-Riddell 
wrote:

> On Sun, Nov 05, 2023 at 12:59:28PM +0100, Friedrich W. H. Kossebau wrote:
> > kactivities and kactivities-stats: please consider proper de-KF-ication
> now
> >
> > Hi,
> >
> > with plasma-framework, kactivities and kactivities-stats entering the
> > Plasma product bundle, I assume they will also adapt to Plasma versioning.
>
> We've done the reversioning now (thanks to those who tidied up after
> me yesterday).  We plan to rename plasma-framework to libplasma
> although I'm not sure who has the energy to do it.  I suppose we'll
> also remove the KF terms from the other ones too.
>

Are you talking about renaming the CMake/Binary side or the
repository/tarball side here?


>
> kwayland is the 4th one being moved, it has been re-versioned but not yet
> moved in invent.
>
> Jonathan
>

Cheers,
Ben


Re: libkexiv2, libkdcraw (Re: Collection of packaging notes)

2023-11-03 Thread Ben Cooksley
On Fri, Nov 3, 2023 at 12:57 PM Albert Astals Cid  wrote:

> El dimecres, 1 de novembre de 2023, a les 13:25:42 (CET), Friedrich W. H.
> Kossebau va escriure:
> > Am Mittwoch, 1. November 2023, 11:55:08 CET schrieb Christophe Marin:
> > > With various alphas coming out soon, here are the notes added to my
> > > packages when I started packaging snapshots, which are still present.
> >
> > Thanks for the report.
> >
> > Everyone:
> > Could we perhaps establish some wiki page where such things could be
> > tracked?
>
>
> I don't particularly think wiki pages are good for tracking issues, we
> have
> issue trackers for that ;)
>
> I've proposed elsewhere to re-use the release-service issue tracker, but
> honestly I've no idea if anyone can create issues here
>   https://invent.kde.org/teams/release-service/issues/-/issues/
> or only team members. If it's only team members, it's not a great place to
> put things, I guess, unless we add lots of folks to the team (which I'm not
> against, but they may be).
>

I have now created
https://invent.kde.org/teams/release-service/qt6-mega-release to help keep
the issues here separate from the ones you've been using to track and
manage the Gear releases.

Issues can be created by anyone, but certain actions on those issues - such
as moving them around the board, labelling them, etc. do require membership
of the group at the reporter or developer level.
See https://docs.gitlab.com/ee/user/permissions.html for more information
on this.

Cheers,
Ben


>
> Cheers,
>   Albert
>
> > > - Non frameworks modules installing libKF*.so
> > > libkexiv2 (libKF6KExiv2.so)
> >
> > Any code ideas for naming it, given there is already a number suffix,
> coming
> > from the library that is wrapped?
> >
> > Similar need also for libkomparediff2, where the 2 is referring to
> diffing 2
> > things, not a version number.
> >
> > > libkdcraw (libKF6KDcraw.so)
> >
> > I have an old patch locally that somehow never got finished; I will
> > hopefully brush it up as an MR tonight. (promised in
> https://invent.kde.org/graphics/libkdcraw/-/
> > merge_requests/9#note_646025 )
> >
> > Cheers
> > Friedrich
>
>
>
>
>


Re: Frameworks 6 alpha

2023-11-01 Thread Ben Cooksley
On Wed, Nov 1, 2023 at 8:42 AM Jonathan Riddell  wrote:

> We chatted about the alpha release due next Wednesday in the Frameworks
> meeting today.
>
> From my notes:
>
> - Frameworks would like to be part of this release
>
> - Nico F, Alex S, David E are release spods
>
> - There's not been any work on doing the tooling.  There's a desire to
> move to releaseme for tooling.  Jonathan uses this plus a load of
> supporting scripts for Plasma and will look at adapting that for Frameworks
> and come up with a proposal.
>
> - Plasma would like to take over release of plasma-framework, kwayland and
> kactivities and that presumably means also kactivities-stats.  There was
> discussion on the problem of moving the gitlab entries for this while 5
> releases are still ongoing so it's probably best to just leave that for now.
>

Having the repositories subject to two different sets of rules is going to
be quite painful, as quite a few systems assume that something in
frameworks/* is a Framework.
This is the case for CI (in the branch-rules, the seed jobs, etc), API
Documentation and a few other places.

It is much easier to make something non-Frameworks look like a Framework
than it is to make something that is classified as Frameworks (but isn't
actually one) not be seen as one.
(ie. the rules we have are inclusionary not exclusionary)

I'd be in favour of saying they need to be moved from frameworks/ to
plasma/ in preparation for this.


>
> - oxygen-icons5 tar should be renamed oxygen-icons (again probably leave
> gitlab repo renaming until later)
>
> - We didn't discuss it but kirigami2 tar should also be renamed to kirigami
>

Not sure why we need to delay renaming the repository?


>
> Does that seem right?
>
> Anything else?
>
> Jonathan
>

Thanks,
Ben


[sysadmin/ci-utilities] gitlab-templates: Move Linux CI for Qt 6 over to Qt 6.6.

2023-10-31 Thread Ben Cooksley
Git commit 55f8993e028b2597dea44077cd49eb91bb9d87e4 by Ben Cooksley.
Committed on 31/10/2023 at 10:23.
Pushed by bcooksley into branch 'master'.

Move Linux CI for Qt 6 over to Qt 6.6.

CCMAIL: kde-de...@kde.org
CCMAIL: kde-core-de...@kde.org
CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: plasma-de...@kde.org

M  +5-5gitlab-templates/linux-qt6-static.yml
M  +5-5gitlab-templates/linux-qt6.yml

https://invent.kde.org/sysadmin/ci-utilities/-/commit/55f8993e028b2597dea44077cd49eb91bb9d87e4

diff --git a/gitlab-templates/linux-qt6-static.yml 
b/gitlab-templates/linux-qt6-static.yml
index 3e1f3fb..852f0b6 100644
--- a/gitlab-templates/linux-qt6-static.yml
+++ b/gitlab-templates/linux-qt6-static.yml
@@ -1,13 +1,13 @@
-suse_tumbleweed_qt65_static:
+suse_tumbleweed_qt66_static:
   stage: build
-  image: invent-registry.kde.org/sysadmin/ci-images/suse-qt65:latest
+  image: invent-registry.kde.org/sysadmin/ci-images/suse-qt66:latest
   tags:
 - Linux
   variables:
-KDECI_CC_CACHE: /mnt/caches/suse-qt6.5-static/
-KDECI_CACHE_PATH: /mnt/artifacts/suse-qt6.5-static/
+KDECI_CC_CACHE: /mnt/caches/suse-qt6.6-static/
+KDECI_CACHE_PATH: /mnt/artifacts/suse-qt6.6-static/
 KDECI_GITLAB_SERVER: https://invent.kde.org/
-KDECI_PACKAGE_PROJECT: teams/ci-artifacts/suse-qt6.5-static
+KDECI_PACKAGE_PROJECT: teams/ci-artifacts/suse-qt6.6-static
   interruptible: true
   before_script:
 - git clone https://invent.kde.org/sysadmin/ci-utilities.git --depth=1
diff --git a/gitlab-templates/linux-qt6.yml b/gitlab-templates/linux-qt6.yml
index 71e5c03..5f0ef50 100644
--- a/gitlab-templates/linux-qt6.yml
+++ b/gitlab-templates/linux-qt6.yml
@@ -1,13 +1,13 @@
-suse_tumbleweed_qt65:
+suse_tumbleweed_qt66:
   stage: build
-  image: invent-registry.kde.org/sysadmin/ci-images/suse-qt65:latest
+  image: invent-registry.kde.org/sysadmin/ci-images/suse-qt66:latest
   tags:
 - Linux
   variables:
-KDECI_CC_CACHE: /mnt/caches/suse-qt6.5/
-KDECI_CACHE_PATH: /mnt/artifacts/suse-qt6.5/
+KDECI_CC_CACHE: /mnt/caches/suse-qt6.6/
+KDECI_CACHE_PATH: /mnt/artifacts/suse-qt6.6/
 KDECI_GITLAB_SERVER: https://invent.kde.org/
-KDECI_PACKAGE_PROJECT: teams/ci-artifacts/suse-qt6.5
+KDECI_PACKAGE_PROJECT: teams/ci-artifacts/suse-qt6.6
   interruptible: true
   before_script:
 - git clone https://invent.kde.org/sysadmin/ci-utilities.git --depth=1


General Availability - Updated Gitlab Runners

2023-09-09 Thread Ben Cooksley
Hi all,

Today we deployed replacements to node3, node4 and node5 - which were the
remaining old workers attached to Invent.

This means that all workers have now been updated to a more
modern host operating system (Ubuntu 22.04) as well as newer generation
hardware (with the CPU now being a Ryzen 7700 compared to the previous
Ryzen 3700X).

Windows builds have also been moved over, with the only change there being
a reduction from 24GB RAM to 16GB RAM being allocated to Windows on those
three nodes. Windows Server 2022 Datacenter Edition continues to be used as
the host OS for those builds.

The legacy builders are currently still performing FreeBSD builds; however,
I will provision those FreeBSD VMs shortly, so they should transition across
soon as well.

Please let us know if there are any issues.

Cheers,
Ben


New CI workers - node1 and node2

2023-08-12 Thread Ben Cooksley
Hi all,

Over the last 2 days I've been busy connecting two new CI workers to
GitLab, which are the beginning of long overdue improvements to our CI
arrangements needed to support the final retirement of the Binary Factory.

While developers shouldn't notice much in the way of changes, this will
bring us a series of long term benefits which include:
- The builders host OS being significantly more up to date, which will
allow certain GPU related tests to be run again
- The ability to build Webengine on the CI system (for Craft caches
primarily) which will support our builds of Linux appimages
- Reduced time to wait for a build to start as more build resources will be
available to Gitlab (while we've had 5 workers for a long time, 2 were
connected to the Binary Factory and were unavailable to Gitlab)

On the Linux and Windows side, the two new builders - node1 and node2 - are
already available and should be carrying out builds already.
I'm still waiting on some details for FreeBSD, but once that has been
sorted we should be able to provision those as well.

For those curious, these two builders (node1 and node2) are equipped with
Ryzen 7 7700 CPUs, 128GB RAM and 1TB of Gen4 NVMe storage, although not all
of this is available to Linux builds as it is apportioned partly to Windows
and FreeBSD VMs.

Once these two machines are in full service, two of the current nodes
(node3 and node4) will be retired to allow them to be replaced with newer
machines as the Binary Factory shutdown approaches (watch this space).

Thanks,
Ben


Re: ACTION REQUIRED - Gitlab and Subversion server migration

2023-07-25 Thread Ben Cooksley
On Tue, Jul 25, 2023 at 1:35 AM Vít Pelčák  wrote:

>
> On Sun, Jul 23, 2023 at 12:01 Ben Cooksley  wrote:
>
>> Good morning KDE Developers,
>>
>> As many of you will be aware, today Gitlab and our Subversion repository
>> were both migrated to a new home - on a more modern and more powerful
>> server, which should better support future work.
>>
>> As a consequence the host key of the server has now changed, which means
>> you will need to take steps on your system otherwise you won't be allowed
>> to connect to the new server.
>>
>> Please ensure you run the following two commands to clear out any
>> existing host keys:
>> - ssh-keygen -R invent.kde.org -f ~/.ssh/known_hosts
>> - ssh-keygen -R svn.kde.org ~/.ssh/known_hosts
>>
>
> I suppose you meant
> ssh-keygen -R svn.kde.org -f ~/.ssh/known_hosts
>

That is correct.


>
> right?
>
>
Cheers,
Ben


ACTION REQUIRED - Gitlab and Subversion server migration

2023-07-23 Thread Ben Cooksley
Good morning KDE Developers,

As many of you will be aware, today Gitlab and our Subversion repository
were both migrated to a new home - on a more modern and more powerful
server, which should better support future work.

As a consequence the host key of the server has now changed, which means
you will need to take steps on your system otherwise you won't be allowed
to connect to the new server.

Please ensure you run the following two commands to clear out any existing
host keys:
- ssh-keygen -R invent.kde.org -f ~/.ssh/known_hosts
- ssh-keygen -R svn.kde.org ~/.ssh/known_hosts

Following these commands the next time you try to connect you will be
prompted to confirm the new host key and trust it for use. For those who
would like to confirm that host key, it is as follows:

256 SHA256:zHdK2R/S6s5Oj71N0s8LHWCXXsUt+DCztd+GjzW9KlU root@lerwini
(ED25519)
256 SHA256:ZNBg4AkRxbt/N6xzpt7GbmmS78A3WFy5lz0l/cPHbcE root@lerwini (ECDSA)
3072 SHA256:KxAoV6VsbKvAocFZCJlxtmPDScmUCRNiUiOCSXNSC/k root@lerwini (RSA)
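
If you'd like to verify these yourself, one common way (a sketch using
standard OpenSSH tooling) is:

# Fetch the new host keys and print their SHA256 fingerprints for comparison
ssh-keyscan invent.kde.org 2>/dev/null | ssh-keygen -lf -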

Please let us know, via either sysad...@kde.org or kde-de...@kde.org if you
encounter any issues with the new system.

Many thanks,
Ben Cooksley
KDE Sysadmin


T11542: Remove KHTML

2023-06-24 Thread Ben Cooksley
bcooksley removed a parent task: T16578: fuck.

TASK DETAIL
  https://phabricator.kde.org/T11542

To: bcooksley
Cc: cordlandwehr, ngraham, #konqueror, #plasma, #okular, #kde_applications, 
#frameworks, knauss, ghost62, hannahk, davidre, GB_2, ahmadsamir, kpiwowarski, 
usta, asturmlechner, jucato, krop, cullmann, vkrause


Re: kio-extras and the KF5/KF6 period

2023-06-21 Thread Ben Cooksley
On Wed, Jun 21, 2023 at 8:22 PM Harald Sitter  wrote:

> On Tue, Jun 20, 2023 at 11:23 PM Sune Vuorela  wrote:
> >
> > On 2023-06-20, David Redondo  wrote:
> > > Harald and I prototyped another solution to build a Qt
> > >  5 and Qt 6 version out of the same repo and employed it on
> > > plasma-integration:
> https://invent.kde.org/plasma/plasma-integration/-/
> > > merge_requests/91
> >
> > Did I miss something or is this just branching but without having git to
> > help us move stuff between versions?
>
> Yes but no. If we have two branches we also need two tars and everyone
> needs to do two things. If we have everything in the same branch then
> everything is as usual. Also, the sources are so divergent that
> picking is bound to be annoying.
>

It sounds like we are approaching (or have already hit) a "crossroads"
where it is time for the Qt 5 codebase to enter a stable-release-only
phase, with master becoming Qt 6 exclusive.
Splitting the code into separate Qt 5 / Qt 6 folders sounds like it will
make it difficult to ensure that all bug fixes are applied equally to both
sides.


>
> HS
>

Cheers,
Ben


[sysadmin/ci-utilities] /: Mark kdewebkit for removal from all CI package archives.

2023-04-22 Thread Ben Cooksley
Git commit a26ee80792b83e7937b375571d937d40d5174cfc by Ben Cooksley.
Committed on 22/04/2023 at 20:01.
Pushed by bcooksley into branch 'master'.

Mark kdewebkit for removal from all CI package archives.

CCMAIL: kde-frameworks-devel@kde.org

M  +12   -0    package-registry-cleanup.py

https://invent.kde.org/sysadmin/ci-utilities/commit/a26ee80792b83e7937b375571d937d40d5174cfc

diff --git a/package-registry-cleanup.py b/package-registry-cleanup.py
index a4afeec..cffa44d 100644
--- a/package-registry-cleanup.py
+++ b/package-registry-cleanup.py
@@ -50,6 +50,12 @@ projectsWithQt6OnlyMaster = [
 'kxmlgui', 'kxmlrpcclient', 'modemmanager-qt', 'networkmanager-qt', 
'oxygen-icons5', 'plasma-framework', 'prison', 'purpose', 'qqc2-desktop-style',
 'solid', 'sonnet', 'syndication', 'syntax-highlighting', 'threadweaver',
 ]
+
+# Configuration - list of projects to always remove
+projectsToAlwaysRemove = [
+# QtWebKit is no longer supported
+'kdewebkit',
+]
 
 # Now that we have that setup, let's find out what packages our Gitlab package 
project knows about
 for package in remoteRegistry.packages.list( as_list=False ):
@@ -68,6 +74,12 @@ for package in remoteRegistry.packages.list( as_list=False ):
 'timestamp': int(timestamp)
 }
 
+# Is this a project we should always be removing?
+if package.name in projectsToAlwaysRemove:
+# Then remove it
+packagesToRemove.append( packageData['package'] )
+continue
+
 # Is this a stale branch we can let go of?
 if branch in ['release-21.08', 'release-21.12', 'release-22.04', 
'release-22.08', 'Plasma-5.24', 'Plasma-5.25', 'Plasma-5.26']:
 # Then mark it for removal


[frameworks/kdewebkit/kf5] /: Remove CI support for kdewebkit.

2023-04-22 Thread Ben Cooksley
Git commit 1c67c1a6b8d7c5abdd5c7ee52a3db012f5be3d96 by Ben Cooksley.
Committed on 22/04/2023 at 19:57.
Pushed by bcooksley into branch 'kf5'.

Remove CI support for kdewebkit.

QtWebKit is now considered unsupported within a CI context, which means 
kdewebkit in turn is no longer supported as well.

CCMAIL: kde-frameworks-devel@kde.org
(cherry picked from commit 6fc5a821235a11491c734aa10055e49a1f7c46e0)

D  +0    -6    .gitlab-ci.yml
D  +0    -12   .kde-ci.yml

https://invent.kde.org/frameworks/kdewebkit/commit/1c67c1a6b8d7c5abdd5c7ee52a3db012f5be3d96

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
deleted file mode 100644
index a607d28..000
--- a/.gitlab-ci.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-FileCopyrightText: 2020 Volker Krause 
-# SPDX-License-Identifier: CC0-1.0
-
-include:
-  - 
https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/linux.yml
-  - 
https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/freebsd.yml
diff --git a/.kde-ci.yml b/.kde-ci.yml
deleted file mode 100644
index 0c8b92b..000
--- a/.kde-ci.yml
+++ /dev/null
@@ -1,12 +0,0 @@
-Dependencies:
-- 'on': ['Linux', 'FreeBSD', 'Windows', 'macOS']
-  'require':
-'frameworks/extra-cmake-modules': '@same'
-'frameworks/kcoreaddons' : '@same'
-'frameworks/kwallet' : '@same'
-'frameworks/kio' : '@same'
-'frameworks/knotifications' : '@same'
-'frameworks/kparts' : '@same'
-
-Options:
-  test-before-installing: True


[frameworks/kdewebkit] /: Remove CI support for kdewebkit.

2023-04-22 Thread Ben Cooksley
Git commit 6fc5a821235a11491c734aa10055e49a1f7c46e0 by Ben Cooksley.
Committed on 22/04/2023 at 19:57.
Pushed by bcooksley into branch 'master'.

Remove CI support for kdewebkit.

QtWebKit is now considered unsupported within a CI context, which means 
kdewebkit in turn is no longer supported as well.

CCMAIL: kde-frameworks-devel@kde.org

D  +0    -6    .gitlab-ci.yml
D  +0    -12   .kde-ci.yml

https://invent.kde.org/frameworks/kdewebkit/commit/6fc5a821235a11491c734aa10055e49a1f7c46e0

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
deleted file mode 100644
index a607d28..000
--- a/.gitlab-ci.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-FileCopyrightText: 2020 Volker Krause 
-# SPDX-License-Identifier: CC0-1.0
-
-include:
-  - 
https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/linux.yml
-  - 
https://invent.kde.org/sysadmin/ci-utilities/raw/master/gitlab-templates/freebsd.yml
diff --git a/.kde-ci.yml b/.kde-ci.yml
deleted file mode 100644
index 0c8b92b..000
--- a/.kde-ci.yml
+++ /dev/null
@@ -1,12 +0,0 @@
-Dependencies:
-- 'on': ['Linux', 'FreeBSD', 'Windows', 'macOS']
-  'require':
-'frameworks/extra-cmake-modules': '@same'
-'frameworks/kcoreaddons' : '@same'
-'frameworks/kwallet' : '@same'
-'frameworks/kio' : '@same'
-'frameworks/knotifications' : '@same'
-'frameworks/kparts' : '@same'
-
-Options:
-  test-before-installing: True


[sysadmin/ci-management] latest: Remove kdewebkit from the CI job seeds.

2023-04-22 Thread Ben Cooksley
Git commit 41002ce5f78d403f001b269153410c5910f529f2 by Ben Cooksley.
Committed on 22/04/2023 at 19:55.
Pushed by bcooksley into branch 'master'.

Remove kdewebkit from the CI job seeds.
Due to the unmaintained status (and removal) of QtWebKit in openSUSE, we have
dropped it from our images, meaning kdewebkit can no longer be built.

All projects are advised that they should also be dropping any dependency they 
have on kdewebkit / QtWebKit as it is no longer supported within a CI system 
context.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: kde-core-de...@kde.org
CCMAIL: kde-de...@kde.org
CCMAIL: kde-...@kde.org
CCMAIL: kmymoney-de...@kde.org

M  +0    -1    latest/frameworks.yml

https://invent.kde.org/sysadmin/ci-management/commit/41002ce5f78d403f001b269153410c5910f529f2

diff --git a/latest/frameworks.yml b/latest/frameworks.yml
index aa0e3cc..5a917d7 100644
--- a/latest/frameworks.yml
+++ b/latest/frameworks.yml
@@ -55,7 +55,6 @@
 "frameworks/kdbusaddons": "kf5"
 "frameworks/kdeclarative": "kf5"
 "frameworks/kdesignerplugin": "kf5"
-"frameworks/kdewebkit": "kf5"
 "frameworks/kdnssd": "kf5"
 "frameworks/kdoctools": "kf5"
 "frameworks/kemoticons": "kf5"


Re: Gitlab Downtime

2023-04-11 Thread Ben Cooksley
On Tue, Apr 11, 2023 at 9:23 PM Ben Cooksley  wrote:

> Hi all,
>
> Tomorrow I will need to conduct some maintenance on our Gitlab instance
> which may take approximately 60 to 90 minutes, depending on how
> things go.
>
> This downtime is needed to facilitate the update of several components
> that Gitlab relies upon, including the underlying Ruby interpreter, and will
> pave the way for us to migrate to a newer version of GitLab in the coming
> days - as well as a replacement GitLab server (which can now be considered,
> as GitLab has finally moved to using Ruby 3).
>
> All going well, i'm hopeful the maintenance will take significantly less
> time than i'm allowing for here.
>
> Maintenance will start at approximately 1930 NZST ( 0730 UTC ) on 11
> April, during which time all services associated with Gitlab (including SSO
> login, Git repositories, etc) will be unavailable.
>

Small typo correction here: 12 April, i.e. tomorrow.


>
> Apologies in advance for the disruption.
>
> Many thanks,
> Ben
>

Thanks,
Ben


Gitlab Downtime

2023-04-11 Thread Ben Cooksley
Hi all,

Tomorrow I will need to conduct some maintenance on our Gitlab instance
which may take approximately 60 to 90 minutes, depending on how
things go.

This downtime is needed to facilitate the update of several components that
Gitlab relies upon, including the underlying Ruby interpreter, and will pave
the way for us to migrate to a newer version of GitLab in the coming days -
as well as a replacement GitLab server (which can now be considered, as
GitLab has finally moved to using Ruby 3).

All going well, i'm hopeful the maintenance will take significantly less
time than i'm allowing for here.

Maintenance will start at approximately 1930 NZST ( 0730 UTC ) on 11 April,
during which time all services associated with Gitlab (including SSO login,
Git repositories, etc) will be unavailable.

Apologies in advance for the disruption.

Many thanks,
Ben


Re: kf6 vs. kf5 conflict report

2023-03-10 Thread Ben Cooksley
On Thu, Mar 9, 2023 at 4:56 AM Aleix Pol  wrote:

> On Wed, Mar 8, 2023 at 3:13 PM Nicolas Fella  wrote:
> >
> > On 3/8/23 14:02, Harald Sitter wrote:
> > > with kf6 progressing nicely here's a first conflict report of files
> > > that appear in both kf6 and kf5 under the same name. this largely
> > > affects translations and docs it seems. this list may not be entirely
> > > comprehensive, I've only thrown together a script in a couple minutes.
> >
> > Thanks Harald!
> >
> > > one question is whether ECM should be co-installable, not sure if that
> > > has been discussed
> >
> > It has come up, and the answer seems to be "No, it will not be
> > coinstallable". This implies that ECM master will continue to support
> > Qt5/KF5, but that should not be a huge burden.
>

From my perspective this has been incredibly poorly communicated to the
point that it is not an actual valid decision.

It is also not what was set in the branch-rules.yml files within the
metadata (which was committed by a Frameworks devel) and was not what was
confirmed by Frameworks developers when I put together the list of projects
to have KF5 master branch builds removed from the CI artifacts store.

This state of affairs has been the source of a degree of CI breakage we
have been experiencing (things are a mess at the moment, I don't even want
to look at any of it).


> >
> > > report for /usr:
> > > https://collaborate.kde.org/s/3gz2KfoGLsS4TF5
> > >
> > > furthermore the following files outside /usr clash between kf6 and 5:
> > > '/etc/xdg/accept-languages.codes'
> > > '/etc/xdg/kshorturifilterrc'
> > > '/etc/xdg/autostart/baloo_file.desktop'
> > > '/lib/udev/rules.d/61-kde-bluetooth-rfkill.rules'
> > >
> > > HS
>
> If ECM master has to support KF5, why do we have a kf5 branch? In
> fact, I'm pretty sure I switched it eventually because there were
> regressions.
>
> Aleix
>

Regards,
Ben


CI Outage

2023-02-16 Thread Ben Cooksley
Hi all,

As many of you will have noticed, the Linux side of our Gitlab CI setup,
including the runs for Android and other miscellaneous jobs (such as
cppcheck), was knocked out yesterday due to a Docker error.

This has now been corrected; it was due to a defect in an update shipped
by the Docker upstream maintainers. They added a hard dependency on an
Apparmor CLI tool, but failed to add the corresponding dependency to
their packages (and to top it all off, part of that package is kernel-side
and only does its initialisation on system boot...)

The nodes have now had their setups corrected and have been rebooted, and
everything is back in service. I have also kicked off builds of just about
everything that is failing on the CI system.

Apologies for the disruption here.

While doing so I noticed a fairly sad number of actual
failures-to-build-from-source:
- Mandatory X11 dependency on Windows:
https://invent.kde.org/utilities/keditbookmarks
- Blind KF6 porting: https://invent.kde.org/pim/kontact/-/jobs/786385
- (and others yet to finish working their way through)

It also looks like we have some packages that are not being rebuilt by seed
jobs (see https://invent.kde.org/pim/kalendar/-/jobs/786366), so once the
system has finished playing catch up we may need to do some additional
infrastructure work there.

Please contact me if you'd like to assist with that.

Thanks,
Ben


Re: [frameworks/knewstuff] src/core: KNSCoree::Engine: Use QUrl for reading providerFileUrl

2023-02-08 Thread Ben Cooksley
Hi Alexander,

With regards to the below change to KNewStuff, has it been rigorously tested
to ensure that this change does not impact how it communicates and behaves
with server-side infrastructure?

I can appreciate that it looks fairly safe and harmless; however, I've been
burned too many times by QNetworkAccessManager and its associated classes
not to ask that we explicitly test and check that the behaviour remains
correct.
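
One cheap sanity check - purely illustrative, and no substitute for testing
against the live servers - is to assert that the QUrl-based field still
round-trips to exactly the string the code previously passed around:

  // Illustrative only: the default provider URL should survive the
  // QString -> QUrl conversion byte-for-byte.
  QUrl providerFileUrl(QStringLiteral("https://autoconfig.kde.org/ocs/providers.xml"));
  Q_ASSERT(providerFileUrl.toString()
           == QLatin1String("https://autoconfig.kde.org/ocs/providers.xml"));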

Thanks,
Ben

On Thu, Feb 9, 2023 at 8:14 AM Alexander Lohnau  wrote:

> Git commit 44f327eee36a1065cc4415d8f412b734609b7d00 by Alexander Lohnau.
> Committed on 06/02/2023 at 20:49.
> Pushed by alex into branch 'master'.
>
> KNSCoree::Engine: Use QUrl for reading providerFileUrl
>
> M  +9    -10   src/core/engine.cpp [INFRASTRUCTURE]
>
>
> https://invent.kde.org/frameworks/knewstuff/commit/44f327eee36a1065cc4415d8f412b734609b7d00
>
> diff --git a/src/core/engine.cpp b/src/core/engine.cpp
> index 90982272..0bf92662 100644
> --- a/src/core/engine.cpp
> +++ b/src/core/engine.cpp
> @@ -54,7 +54,7 @@
>
>  using namespace KNSCore;
>
> -typedef QHash EngineProviderLoaderHash;
> +typedef QHash EngineProviderLoaderHash;
>  Q_GLOBAL_STATIC(QThreadStorage,
> s_engineProviderLoaders)
>
>  class EnginePrivate
> @@ -154,8 +154,7 @@ public:
>  QSharedPointer cache;
>  QTimer *searchTimer = new QTimer();
>  // The url of the file containing information about content providers
> -/// TODO KF6 This really wants to be turned into a QUrl (which will
> have implications for our public API, so not doing it just now)
> -QString providerFileUrl;
> +QUrl providerFileUrl;
>  // Categories from knsrc file
>  QStringList categories;
>
> @@ -272,14 +271,14 @@ bool Engine::init(const QString &configfile)
>  d->uploadEnabled = group.readEntry("UploadEnabled", true);
>  Q_EMIT uploadEnabledChanged();
>
> -d->providerFileUrl = group.readEntry("ProvidersUrl", QStringLiteral("
> https://autoconfig.kde.org/ocs/providers.xml";));
> -if (d->providerFileUrl == QLatin1String("
> https://download.kde.org/ocs/providers.xml";)) {
> -d->providerFileUrl = QStringLiteral("
> https://autoconfig.kde.org/ocs/providers.xml";);
> +d->providerFileUrl = group.readEntry("ProvidersUrl",
> QUrl(QStringLiteral("https://autoconfig.kde.org/ocs/providers.xml";)));
> +if (d->providerFileUrl.toString() == QLatin1String("
> https://download.kde.org/ocs/providers.xml";)) {
> +d->providerFileUrl = QUrl(QStringLiteral("
> https://autoconfig.kde.org/ocs/providers.xml";));
>  qCWarning(KNEWSTUFFCORE) << "Please make sure" << configfile <<
> "has ProvidersUrl=https://autoconfig.kde.org/ocs/providers.xml";;
>  }
>  if (group.readEntry("UseLocalProvidersFile", "false").toLower() ==
> QLatin1String{"true"}) {
>  // The local providers file is called "appname.providers", to
> match "appname.knsrc"
> -d->providerFileUrl =
> QUrl::fromLocalFile(QLatin1String("%1.providers").arg(configFullPath.left(configFullPath.length()
> - 6))).toString();
> +d->providerFileUrl =
> QUrl::fromLocalFile(QLatin1String("%1.providers").arg(configFullPath.left(configFullPath.length()
> - 6)));
>  }
>
>  d->tagFilter = group.readEntry("TagFilter",
> QStringList(QStringLiteral("ghns_excluded!=1")));
> @@ -404,7 +403,7 @@ void Engine::loadProviders()
>  }
>  }
>  });
> -loader->load(QUrl(d->providerFileUrl));
> +loader->load(d->providerFileUrl);
>  }
>  connect(loader, &XmlLoader::signalLoaded, this,
> &Engine::slotProviderFileLoaded);
>  connect(loader, &XmlLoader::signalFailed, this,
> &Engine::slotProvidersFailed);
> @@ -425,7 +424,7 @@ void Engine::slotProviderFileLoaded(const QDomDocument
> &doc)
>  } else if (providers.tagName() != QLatin1String("ghnsproviders") &&
> providers.tagName() != QLatin1String("knewstuffproviders")) {
>  qWarning() << "No document in providers.xml.";
>  Q_EMIT signalErrorCode(KNSCore::ProviderError,
> -   i18n("Could not load get hot new stuff
> providers from file: %1", d->providerFileUrl),
> +   i18n("Could not load get hot new stuff
> providers from file: %1", d->providerFileUrl.toString()),
> d->providerFileUrl);
>  return;
>  }
> @@ -507,7 +506,7 @@ void Engine::providerJobStarted(KJob *job)
>
>  void Engine::slotProvidersFailed()
>  {
> -Q_EMIT signalErrorCode(KNSCore::ProviderError, i18n("Loading of
> providers from file: %1 failed", d->providerFileUrl), d->providerFileUrl);
> +Q_EMIT signalErrorCode(KNSCore::ProviderError, i18n("Loading of
> providers from file: %1 failed", d->providerFileUrl.toString()),
> d->providerFileUrl);
>  }
>
>  void Engine::providerInitialized(Provider *p)
>
>


[sysadmin/ci-utilities] components: Banish the Frameworks Wayland Client log lines as well.

2022-11-02 Thread Ben Cooksley
Git commit 911af65242dc46fa873d9ba50026618f8d14769b by Ben Cooksley.
Committed on 02/11/2022 at 07:48.
Pushed by bcooksley into branch 'master'.

Banish the Frameworks Wayland Client log lines as well.
It is also extremely chatty in KWin log files and represents 15% of the size of 
the CI run logs.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: plasma-de...@kde.org

M  +1    -1    components/TestHandler.py

https://invent.kde.org/sysadmin/ci-utilities/commit/911af65242dc46fa873d9ba50026618f8d14769b

diff --git a/components/TestHandler.py b/components/TestHandler.py
index e115456..3d879bd 100644
--- a/components/TestHandler.py
+++ b/components/TestHandler.py
@@ -65,7 +65,7 @@ def run( projectConfig, sourcesPath, buildPath, installPath, 
buildEnvironment ):
 
 # We want Qt to be noisy about debug output to make debugging tests easier
 # Some stuff is so verbose it hits the testlib maxwarnings limits though
-buildEnvironment['QT_LOGGING_RULES'] = 
"*.debug=true;qt.text.font.db=false;kf.globalaccel.kglobalacceld=false"
+buildEnvironment['QT_LOGGING_RULES'] = 
"*.debug=true;qt.text.font.db=false;kf.globalaccel.kglobalacceld=false;kf.wayland.client=false"
 # We want to force Qt to print to stderr, even on Windows
 buildEnvironment['QT_LOGGING_TO_CONSOLE'] = '1'
 buildEnvironment['QT_FORCE_STDERR_LOGGING'] = '1'
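
For anyone wanting to reproduce the CI's log filtering when running tests
locally, the equivalent shell setup would be roughly the following (a sketch
for a POSIX shell; the ctest invocation is illustrative):

  export QT_LOGGING_RULES="*.debug=true;qt.text.font.db=false;kf.globalaccel.kglobalacceld=false;kf.wayland.client=false"
  export QT_LOGGING_TO_CONSOLE=1
  export QT_FORCE_STDERR_LOGGING=1
  ctest --output-on-failure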


[sysadmin/ci-utilities] components: Silence KGlobalAccel debug output from KDE CI test logs.

2022-11-02 Thread Ben Cooksley
Git commit 93d26cd29d1638c240dbe670b388afb07429709f by Ben Cooksley.
Committed on 02/11/2022 at 07:35.
Pushed by bcooksley into branch 'master'.

Silence KGlobalAccel debug output from KDE CI test logs.
It is far too chatty with KWin at least and represents 57% of its CI run log
lines on Linux.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: plasma-de...@kde.org

M  +1    -1    components/TestHandler.py

https://invent.kde.org/sysadmin/ci-utilities/commit/93d26cd29d1638c240dbe670b388afb07429709f

diff --git a/components/TestHandler.py b/components/TestHandler.py
index e92154c..e115456 100644
--- a/components/TestHandler.py
+++ b/components/TestHandler.py
@@ -65,7 +65,7 @@ def run( projectConfig, sourcesPath, buildPath, installPath, 
buildEnvironment ):
 
 # We want Qt to be noisy about debug output to make debugging tests easier
 # Some stuff is so verbose it hits the testlib maxwarnings limits though
-buildEnvironment['QT_LOGGING_RULES'] = "*.debug=true;qt.text.font.db=false"
+buildEnvironment['QT_LOGGING_RULES'] = 
"*.debug=true;qt.text.font.db=false;kf.globalaccel.kglobalacceld=false"
 # We want to force Qt to print to stderr, even on Windows
 buildEnvironment['QT_LOGGING_TO_CONSOLE'] = '1'
 buildEnvironment['QT_FORCE_STDERR_LOGGING'] = '1'


Moving CI to Qt 6.4

2022-10-02 Thread Ben Cooksley
Hi all,

As part of recent updates to the CI system i've been revising our Qt 6
image to move it to Qt 6.4.

This has shown, alas, that KIO fails to build from source:
https://invent.kde.org/sysadmin/ci-management/-/jobs/505902

Can someone please take a look?

Thanks,
Ben


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-09-04 Thread Ben Cooksley
On Sun, Sep 4, 2022 at 8:51 PM Gilles Caulier 
wrote:

> Hi Ben,
>

HI Gilles,


>
> With build/binary-factory , it was possible to get an Embeddable Build
> Status Icon as this one :
>
>
> https://binary-factory.kde.org/view/AppImage/job/Digikam_Nightly_appimage-centos7/badge/
>
> Does this feature still exist with Gitlab infrastructure ?
>

Yes, quoting my earlier email:

[quote]
Gitlab provides a limited selection of badges - which can be found at:
- https://invent.kde.org/multimedia/kdenlive/badges/master/pipeline.svg
- https://invent.kde.org/multimedia/kdenlive/badges/master/coverage.svg
- https://invent.kde.org/multimedia/kdenlive/-/badges/release.svg
[/quote]

You'll need to swap multimedia/kdenlive to graphics/digikam but otherwise
that should work fine.
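
As an example, a README snippet embedding the pipeline badge - hypothetical,
with the project path adjusted as needed - could look like:

  [![Pipeline status](https://invent.kde.org/graphics/digikam/badges/master/pipeline.svg)](https://invent.kde.org/graphics/digikam/-/pipelines)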

Please note that the Binary Factory is not impacted by this, so anything
relating to the Binary Factory is unchanged at this time.


>
> Thanks
>
> Gilles
>

Regards,
Ben


>
> On Sat, 27 Aug 2022 at 11:45, Ben Cooksley  wrote:
> >
> > Hi all,
> >
> > This evening I completed the necessary setup required to complete our
> Gitlab CI dashboards, which can now be found at
> https://metrics.kde.org/dashboards/f/aNxvXJW4k/gitlab-ci (KDE Developer
> account login required)
> >
> > These allow any developer to get a view on the current CI status of
> projects and groups of projects on a branch and platform basis - and should
> hopefully provide useful insight into the overall status that can currently
> be obtained from Jenkins.
> >
> > As this was the last thing holding us back from shutting down
> build.kde.org, i'd like to proceed with retiring and shutting down
> build.kde.org as soon as possible so the capacity it occupies can be
> released and reallocated to Gitlab.
> >
> > If anyone would like to experiment with customised views for their own
> purposes (where the above provided ones are insufficient) please file a
> Sysadmin ticket.
> >
> > Please let me know if there are any questions on the above.
> >
> > Thanks,
> > Ben
>


Notice of impending change to Gitlab CI

2022-09-04 Thread Ben Cooksley
Hi all,

Currently our Gitlab CI jobs for Linux (SUSE) and Android (Ubuntu) run
their respective jobs as root within the Docker containers that Gitlab
spawns for them.

This is a restriction that was previously required by the simultaneous use
of these same images by Jenkins; following the shutdown of build.kde.org
this weekend, that is no longer a problem.

I will look into making this adjustment to our CI image(s) in the coming
week for Linux (SUSE). Once implemented, tests which made use of root
privileges may begin to fail (and those that were incompatible with running
as root will begin to pass).
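
For tests that genuinely require root, a defensive guard along these lines
(a sketch using QTest; the skip message is illustrative) avoids hard
failures either way:

  // Sketch: skip, rather than fail, when run without root privileges.
  // geteuid() comes from <unistd.h>; QSKIP is part of QTest.
  if (::geteuid() != 0) {
      QSKIP("This test requires root privileges, which the CI no longer provides");
  }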

Android Qt 5 builds still share the image with the Binary Factory and
therefore will have to continue to run as root at this time, until we are
able to discontinue the Binary Factory (or at least, Android's use of it).

Thanks,
Ben


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-09-03 Thread Ben Cooksley
On Sun, Sep 4, 2022 at 7:54 AM Johnny Jazeix  wrote:

>
>
> On Sat, 3 Sep 2022 at 21:28, Ben Cooksley  wrote:
>
>> On Sat, Sep 3, 2022 at 9:29 PM Gleb Popov <6year...@gmail.com> wrote:
>>
>>> On Sat, Sep 3, 2022 at 7:46 AM Ben Cooksley  wrote:
>>> >
>>> > As previously indicated, I have now shut down build.kde.org along with
>>> the domain that supported its version of the CI tooling.
>>> > The repository containing that tooling has now also been archived, and
>>> the former build.kde.org domain has been redirected to metrics.kde.org.
>>> >
>>> > The server which was acting as a builder for build.kde.org will be
>>> rebuilt in the coming days and reallocated to support Gitlab CI workloads.
>>> >
>>> > Thanks,
>>> > Ben
>>>
>>> What should be used instead of binary-factory? How do I transform this
>>> link?
>>>
>>>
>>> https://binary-factory.kde.org/view/Windows%2064-bit/job/Kate_Release_win64/1762/artifact/kate-22.08.0-1762-windows-msvc2019_64-cl.exe
>>
>>
>> At this time the Binary Factory is not impacted by this.
>>
>> Regards,
>> Ben
>>
>
> Hi,
>
> I think the issue mentioned by Gleb is that this link (and all other
> artifacts from binary-factory) is redirected to
> https://build-artifacts.kde.org/binary-factory/Kate_Release_win64/1762/kate-22.08.0-1762-windows-msvc2019_64-cl.exe
> which does not exist.
>

Oops. That is an oversight on my part - apologies - and it has now been
corrected (although the URLs will have changed).

Cheers,
Ben


>
>
> Cheers,
> Johnny
>


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-09-03 Thread Ben Cooksley
On Sun, Sep 4, 2022 at 2:13 AM Michael Reeves  wrote:

> I now have no way to even test macOS builds for kdiff3, as I have no access
> to a 64-bit Intel Mac. What are the plans for this and Windows
> builds? I have a functional Windows-based Craft install locally.
>

At this time the Binary Factory is unaffected by these changes; however,
steps will be taken in the coming weeks/months to migrate away from the
Binary Factory to equivalent Gitlab jobs (although they won't be available
for Merge Requests due to various technical limitations).

Regards,
Ben


>
>
> Sep 3, 2022 12:47:06 AM Ben Cooksley :
>
> On Sat, Aug 27, 2022 at 9:44 PM Ben Cooksley  wrote:
>
>> Hi all,
>>
>> This evening I completed the necessary setup required to complete our
>> Gitlab CI dashboards, which can now be found at
>> https://metrics.kde.org/dashboards/f/aNxvXJW4k/gitlab-ci (KDE Developer
>> account login required)
>>
>> These allow any developer to get a view on the current CI status of
>> projects and groups of projects on a branch and platform basis - and should
>> hopefully provide useful insight into the overall status that can currently
>> be obtained from Jenkins.
>>
>> As this was the last thing holding us back from shutting down
>> build.kde.org, i'd like to proceed with retiring and shutting down
>> build.kde.org as soon as possible so the capacity it occupies can be
>> released and reallocated to Gitlab.
>>
>
> As previously indicated, I have now shut down build.kde.org along with the
> domain that supported its version of the CI tooling.
> The repository containing that tooling has now also been archived, and the
> former build.kde.org domain has been redirected to metrics.kde.org.
>
> The server which was acting as a builder for build.kde.org will be
> rebuilt in the coming days and reallocated to support Gitlab CI workloads.
>
>
>>
>> If anyone would like to experiment with customised views for their own
>> purposes (where the above provided ones are insufficient) please file a
>> Sysadmin ticket.
>>
>> Please let me know if there are any questions on the above.
>>
>> Thanks,
>> Ben
>>
>
> Thanks,
> Ben
>
>


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-09-03 Thread Ben Cooksley
On Sat, Sep 3, 2022 at 9:29 PM Gleb Popov <6year...@gmail.com> wrote:

> On Sat, Sep 3, 2022 at 7:46 AM Ben Cooksley  wrote:
> >
> > As previously indicated, I have now shut down build.kde.org along with
> the domain that supported its version of the CI tooling.
> > The repository containing that tooling has now also been archived, and
> the former build.kde.org domain has been redirected to metrics.kde.org.
> >
> > The server which was acting as a builder for build.kde.org will be
> rebuilt in the coming days and reallocated to support Gitlab CI workloads.
> >
> > Thanks,
> > Ben
>
> What should be used instead of binary-factory? How do I transform this
> link?
>
>
> https://binary-factory.kde.org/view/Windows%2064-bit/job/Kate_Release_win64/1762/artifact/kate-22.08.0-1762-windows-msvc2019_64-cl.exe


At this time the Binary Factory is not impacted by this.

Regards,
Ben


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-09-02 Thread Ben Cooksley
On Sat, Aug 27, 2022 at 9:44 PM Ben Cooksley  wrote:

> Hi all,
>
> This evening I completed the necessary setup required to complete our
> Gitlab CI dashboards, which can now be found at
> https://metrics.kde.org/dashboards/f/aNxvXJW4k/gitlab-ci (KDE Developer
> account login required)
>
> These allow any developer to get a view on the current CI status of
> projects and groups of projects on a branch and platform basis - and should
> hopefully provide useful insight into the overall status that can currently
> be obtained from Jenkins.
>
> As this was the last thing holding us back from shutting down
> build.kde.org, i'd like to proceed with retiring and shutting down
> build.kde.org as soon as possible so the capacity it occupies can be
> released and reallocated to Gitlab.
>

As previously indicated, I have now shut down build.kde.org along with the
domain that supported its version of the CI tooling.
The repository containing that tooling has now also been archived, and the
former build.kde.org domain has been redirected to metrics.kde.org.

The server which was acting as a builder for build.kde.org will be rebuilt
in the coming days and reallocated to support Gitlab CI workloads.


>
> If anyone would like to experiment with customised views for their own
> purposes (where the above provided ones are insufficient) please file a
> Sysadmin ticket.
>
> Please let me know if there are any questions on the above.
>
> Thanks,
> Ben
>

Thanks,
Ben


Re: Gitlab CI Dashboards and retirement of build.kde.org

2022-08-27 Thread Ben Cooksley
On Sun, Aug 28, 2022 at 4:40 AM Albert Astals Cid  wrote:

> On Saturday, 27 August 2022 at 11:44:47 (CEST), Ben Cooksley
> wrote:
> > Hi all,
> >
> > This evening I completed the necessary setup required to complete our
> > Gitlab CI dashboards, which can now be found at
> > https://metrics.kde.org/dashboards/f/aNxvXJW4k/gitlab-ci (KDE Developer
> > account login required)
> >
> > These allow any developer to get a view on the current CI status of
> > projects and groups of projects on a branch and platform basis - and
> should
> > hopefully provide useful insight into the overall status that can
> currently
> > be obtained from Jenkins.
> >
> > As this was the last thing holding us back from shutting down
> build.kde.org,
> > i'd like to proceed with retiring and shutting down build.kde.org as
> soon
> > as possible so the capacity it occupies can be released and reallocated
> to
> > Gitlab.
> >
> > If anyone would like to experiment with customised views for their own
> > purposes (where the above provided ones are insufficient) please file a
> > Sysadmin ticket.
> >
> > Please let me know if there are any questions on the above.
>
> Looks great.
>

Yay!


>
> One thing that i'm not sure i understand correctly: currently in the
> General
> Overview, it says that there are 3 projects currently failing (kwin,
> kpackage
> and kphotoalbum), but then if i go to the Per Platform View i get that
> rkward
> is failing on Windows. Shouldn't rkward also be listed as failing on the
> general overview?
>

That is a rather curious bug, caused by the fact it was looking at things
on a Pipeline vs. Job basis.

The query you were looking at listed the most recent pipeline runs on a
per-project basis, which in the case of rkward means the last push by
scripty - which was skipped (so not a failure).
I've tweaked the query to look at things on a per-job basis now, which
avoids that issue.


>
> Cheers,
>   Albert
>

Cheers,
Ben


>
> >
> > Thanks,
> > Ben
>
>
>
>
>


Gitlab CI Dashboards and retirement of build.kde.org

2022-08-27 Thread Ben Cooksley
Hi all,

This evening I completed the necessary setup required to complete our
Gitlab CI dashboards, which can now be found at
https://metrics.kde.org/dashboards/f/aNxvXJW4k/gitlab-ci (KDE Developer
account login required)

These allow any developer to get a view on the current CI status of
projects and groups of projects on a branch and platform basis - and should
hopefully provide useful insight into the overall status that can currently
be obtained from Jenkins.

As this was the last thing holding us back from shutting down build.kde.org,
i'd like to proceed with retiring and shutting down build.kde.org as soon
as possible so the capacity it occupies can be released and reallocated to
Gitlab.

If anyone would like to experiment with customised views for their own
purposes (where the above provided ones are insufficient) please file a
Sysadmin ticket.

Please let me know if there are any questions on the above.

Thanks,
Ben


[sysadmin/ci-management] /: Try to align the number of folders between seed jobs and normal CI jobs.

2022-04-06 Thread Ben Cooksley
Git commit 707e016918c0174235b1dc19883620f96f363572 by Ben Cooksley.
Committed on 07/04/2022 at 05:41.
Pushed by bcooksley into branch 'master'.

Try to align the number of folders between seed jobs and normal CI jobs.
This only affects Windows (presumably everywhere else the path is absolute
while on Windows it is relative), despite all our other jobs having the same
layout.

CCMAIL: kde-frameworks-devel@kde.org

M  +8    -4    .gitlab-ci.yml

https://invent.kde.org/sysadmin/ci-management/commit/707e016918c0174235b1dc19883620f96f363572

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 96c828d..225ecf3 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -154,7 +154,8 @@ frameworks_windows_qt515:
   - .windows_qt515
   script:
 - . ci-utilities/resources/setup-msvc-env.ps1
-- python -u ci-utilities/seed-package-registry.py --seed-file 
latest/frameworks.yml --platform Windows
+- cd ..
+- python -u ci-management/ci-utilities/seed-package-registry.py 
--seed-file ci-management/latest/frameworks.yml --platform Windows
 
 ## Frameworks 6 jobs
 frameworks_suse_tumbleweed_qt62:
@@ -206,7 +207,8 @@ release_service_windows_qt515:
   - .windows_qt515
   script:
 - . ci-utilities/resources/setup-msvc-env.ps1
-- python -u ci-utilities/seed-package-registry.py --seed-file 
latest/release-service.yml --platform Windows
+- cd ..
+- python -u ci-management/ci-utilities/seed-package-registry.py 
--seed-file ci-management/latest/release-service.yml --platform Windows
 
 ## Plasma jobs
 
@@ -276,7 +278,8 @@ independent_release_windows_qt515:
   - .windows_qt515
   script:
 - . ci-utilities/resources/setup-msvc-env.ps1
-- python -u ci-utilities/seed-package-registry.py --seed-file 
latest/independent-release.yml --platform Windows
+- cd ..
+- python -u ci-management/ci-utilities/seed-package-registry.py 
--seed-file ci-management/latest/independent-release.yml --platform Windows
 
 ## PIM
 
@@ -311,4 +314,5 @@ pim_windows_qt515:
   - .windows_qt515
   script:
 - . ci-utilities/resources/setup-msvc-env.ps1
-- python -u ci-utilities/seed-package-registry.py --seed-file 
latest/pim.yml --platform Windows
+- cd ..
+- python -u ci-management/ci-utilities/seed-package-registry.py 
--seed-file ci-management/latest/pim.yml --platform Windows


Re: Unit tests all pass in Jenkins on Linux

2022-03-21 Thread Ben Cooksley
On Mon, Mar 21, 2022 at 9:43 AM David Faure  wrote:

> On Sunday 13 March 2022 17:53:13 CET Ben Cooksley wrote:
> > On Mon, Mar 14, 2022 at 4:40 AM David Faure  wrote:
> > > After the recent discussions on state of CI, I fixed the last unittest
> > > failures (kio, purpose... + apol fixed ECM) so that
> > > https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/
> > > is all green^H^Hblue again.
> > > Please keep it that way!
> >
> > Thanks for looking into and fixing all of these David.
>
> Now I'd like to fix the remaining unittest failures on FreeBSD.
>
> I just fixed kcrash by reading the unittest code.
> However for the remaining ones, I need to actually debug on FreeBSD.
> Is there a FreeBSD virtual machine with the full setup already done for
> building KDE Frameworks, that I could either run locally or log into?
>

I believe Tobias may have some instructions on how to assemble a machine
(as these are what he used to assemble the VM images currently running the
FreeBSD environment on Gitlab and Jenkins).


>
> --
> David Faure, fa...@kde.org, http://www.davidfaure.fr
> Working on KDE Frameworks 5
>
>
Cheers,
Ben


Re: Windows unittests: KConfig

2022-03-21 Thread Ben Cooksley
On Mon, Mar 21, 2022 at 11:42 AM David Faure  wrote:

> On Sunday 20 March 2022 22:13:17 CET Christoph Cullmann (cullmann.io)
> wrote:
> > On 2022-03-20 22:07, David Faure wrote:
> > > The KConfig unittests rely on DBus nowadays (for change notification).
> > > This is turned off on Android, and is a cmake option elsewhere,
> > > defaulting to ON.
> > >
> > > On Windows, I'm sure a lot of other modules rely on DBus, so I suppose
> > > it's just a matter of starting the dbus daemon for those modules that
> > > need it?
> > > Currently the kconfig tests fail for lack of dbus:
> > >
> > > 21:55:23  ERROR: The process "dbus-daemon.exe" not found.
> > >
> https://build.kde.org/job/Frameworks/view/Platform%20-%20WindowsMSVCQt5.15
> > > /job/kconfig/job/kf5-qt5%20WindowsMSVCQt5.15/217/console
> > Hi,
> >
> > actually, if one can just disable that on Windows, I would be rather in
> > favor of that.
> > Any dbus stuff is just a pain there and at least Okular/Kate/... as
> > packaged for Windows store avoid the use of any dbus calls.
>
> Makes sense.
>
> I was thinking "akonadi needs dbus anyway", but indeed, that doesn't apply
> to
> standalone apps, and the DBus stuff in KConfig seems to be mostly for
> workspace-level notifications (color theme changed, etc.).
>
> Made a merge request to turn this off on Windows by default:
> https://invent.kde.org/frameworks/kconfig/-/merge_requests/120


If we could head in the direction of being free of D-Bus on Windows (and
Mac as well, I guess) then that would definitely be preferable.
D-Bus makes no sense on those platforms and has only been a cause of issues.

Cheers,
Ben


>
>
> --
> David Faure, fa...@kde.org, http://www.davidfaure.fr
> Working on KDE Frameworks 5
>
>
>
>


Re: Unit tests all pass in Jenkins on Linux

2022-03-14 Thread Ben Cooksley
On Mon, Mar 14, 2022 at 4:44 PM Eduardo de Souza Cruz <
eduardo.c...@kdemail.net> wrote:

> Hi,
>
> Regarding the krunner timer-based test which I authored:
>
> We could just delete the few lines that are enforcing those upper-limit
> timeouts and add some comments explaining that in real life those timeouts
> should have been met; I don't think we need to delete the entire test. I
> was explicitly asked to write a test when I submitted the MR to ensure
> that this functionality wouldn't regress in the future.
>
> Also, I'm wondering... if we happen to have a compilation directive that's
> #defined in this CI environment only, or some framework function that
> returns whether we are in the CI environment or not, we could put some
> #ifndef (or if's) on those few upper-limit timeout lines and avoid
> compiling/running just those lines in the CI environment. I'm not familiar
> with this environment so I don't know if there is such a thing, I'm just
> wondering...
>

Not at this time, I'm afraid.

There are some environment variables you can look to but those differ
between our Jenkins and Gitlab setups.
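
As a sketch of what such a guard could look like (the variable name below is
a placeholder, not a real one - as noted, the actual names differ between
the two systems):

  // Hypothetical: only enforce the strict upper-bound timings outside CI.
  const bool onCI = qEnvironmentVariableIsSet("KDE_CI_PLACEHOLDER");
  if (!onCI) {
      QVERIFY(elapsedMs < 100); // elapsedMs as measured by the test itself
  }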


>
> I'm available to submit a quick MR if you'd like me to, just give me some
> directions for what you'd like me to do.
>
> [],
> Eduardo
>

Cheers,
Ben


>
> --
> *From:* Ben Cooksley 
> *Sent:* Sunday, March 13, 2022 1:53 PM
> *To:* KDE Frameworks 
> *Cc:* eduardo.c...@kdemail.net 
> *Subject:* Re: Unit tests all pass in Jenkins on Linux
>
> On Mon, Mar 14, 2022 at 4:40 AM David Faure  wrote:
>
> After the recent discussions on state of CI, I fixed the last unittest
> failures (kio, purpose... + apol fixed ECM) so that
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/
> is all green^H^Hblue again.
> Please keep it that way!
>
>
> Thanks for looking into and fixing all of these David.
>
>
>
> Note however that
>
> * kwayland has a flaky test:
>
>
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/kwayland/job/kf5-qt5%20SUSEQt5.15/171/testReport/junit/projectroot.autotests/client/kwayland_testDataDevice/
>
> FAIL!  : TestDataDevice::testReplaceSource() Compared values are not the
> same
>Actual   (selectionOfferedSpy.count()): 1
>Expected (2)  : 2
>Loc: [autotests/client/test_datadevice.cpp(557)]
>
> Who can look at this one? git log mostly shows Martin Flöser <
> mgraess...@kde.org>
> who I think isn't active anymore?
>
>
Not sure if it applies to KWayland as well, but I know that KWin has
load-sensitive tests (which is why the Gitlab .kde-ci.yml files support the
flag tests-load-sensitive).
If this test appears to be flaky, then it is quite possible that it is
load-sensitive as well.
>
>
>
> * krunner has a flaky test [2] because it measures time spent and expects
> small values like 65ms
> (I changed that one to 100ms), 250ms, 300ms. With only 10% safety margins.
> On a busy CI system,
> this is bound to fail regularly, even with bigger safety margins. In my
> experience this kind of test
> is just not possible (we're not running on a real time OS), I vote for
> removing the test.
> CC'ing Eduardo.
>
>
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/krunner/job/kf5-qt5%20SUSEQt5.15/325/testReport/junit/projectroot/autotests/runnermanagertest/
>
>
Yes, that will definitely fail more often than not - the only way to make
sure tests like this pass on our CI system is to
set tests-load-sensitive=True (in Gitlab CI).
Note however that this option should be avoided where possible, as it means your
> build will stop and wait for load to fall to low levels before proceeding
> with running tests - which blocks a CI worker slot from being used by
> another project.
>
> I'd also be in favour of removing this test.
>
>
>
> --
> David Faure, fa...@kde.org, http://www.davidfaure.fr
> Working on KDE Frameworks 5
>
>
>
>
> Cheers,
> Ben
>


Re: KDE CI: Frameworks » kquickcharts » kf5-qt5 WindowsMSVCQt5.15 - Build # 101 - Still Failing!

2022-03-14 Thread Ben Cooksley
Hi all,

Can someone please confirm whether it is expected that KQuickCharts is
Linux/FreeBSD only, given that it is QtQuick-based (and therefore should be
fairly platform-agnostic)?

Cheers,
Ben

On Mon, Mar 14, 2022 at 6:01 PM CI System  wrote:

> *BUILD FAILURE*
> Build URL
> https://build.kde.org/job/Frameworks/job/kquickcharts/job/kf5-qt5%20WindowsMSVCQt5.15/101/
> Project: kf5-qt5 WindowsMSVCQt5.15
> Date of build: Mon, 14 Mar 2022 05:00:46 +
> Build duration: 23 sec and counting
> * CONSOLE OUTPUT *
> [...truncated 140 lines...]
> [2022-03-14T05:01:02.160Z] PROCESSOR_REVISION = '0102'
> [2022-03-14T05:01:02.160Z] PROGRAMDATA = 'C:\ProgramData'
> [2022-03-14T05:01:02.160Z] PROGRAMFILES = 'C:\Program Files'
> [2022-03-14T05:01:02.160Z] PROGRAMFILES(X86) = 'C:\Program Files (x86)'
> [2022-03-14T05:01:02.160Z] PROGRAMW6432 = 'C:\Program Files'
> [2022-03-14T05:01:02.160Z] PROMPT = '$P$G'
> [2022-03-14T05:01:02.160Z] PSMODULEPATH =
> '%ProgramFiles%\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules'
> [2022-03-14T05:01:02.160Z] PUBLIC = 'C:\Users\Public'
> [2022-03-14T05:01:02.160Z] RUN_ARTIFACTS_DISPLAY_URL = '
> https://build.kde.org/job/Frameworks/job/kquickcharts/job/kf5-qt5%20WindowsMSVCQt5.15/101/display/redirect?page=artifacts
> '
> [2022-03-14T05:01:02.160Z] RUN_CHANGES_DISPLAY_URL = '
> https://build.kde.org/job/Frameworks/job/kquickcharts/job/kf5-qt5%20WindowsMSVCQt5.15/101/display/redirect?page=changes
> '
> [2022-03-14T05:01:02.160Z] RUN_DISPLAY_URL = '
> https://build.kde.org/job/Frameworks/job/kquickcharts/job/kf5-qt5%20WindowsMSVCQt5.15/101/display/redirect
> '
> [2022-03-14T05:01:02.160Z] RUN_TESTS_DISPLAY_URL = '
> https://build.kde.org/job/Frameworks/job/kquickcharts/job/kf5-qt5%20WindowsMSVCQt5.15/101/display/redirect?page=tests
> '
> [2022-03-14T05:01:02.160Z] STAGE_NAME = 'Configuring Build'
> [2022-03-14T05:01:02.160Z] SYSTEMDRIVE = 'C:'
> [2022-03-14T05:01:02.160Z] SYSTEMROOT = 'C:\WINDOWS'
> [2022-03-14T05:01:02.160Z] TEMP = 'C:\Users\Jenkins\AppData\Local\Temp'
> [2022-03-14T05:01:02.160Z] TMP = 'C:\Users\Jenkins\AppData\Local\Temp'
> [2022-03-14T05:01:02.160Z] UCRTVERSION = '10.0.19041.0'
> [2022-03-14T05:01:02.160Z] UNIVERSALCRTSDKDIR = 'C:\Program Files
> (x86)\Windows Kits\10\'
> [2022-03-14T05:01:02.160Z] USERDOMAIN = 'DESKTOP-9TVNRIT'
> [2022-03-14T05:01:02.160Z] USERNAME = 'Jenkins'
> [2022-03-14T05:01:02.160Z] USERPROFILE = 'C:\Users\Jenkins'
> [2022-03-14T05:01:02.160Z] VCIDEINSTALLDIR = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\VC\'
> [2022-03-14T05:01:02.160Z] VCINSTALLDIR = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\VC\'
> [2022-03-14T05:01:02.160Z] VCTOOLSINSTALLDIR = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30037\'
> [2022-03-14T05:01:02.160Z] VCTOOLSREDISTDIR = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\VC\Redist\MSVC\14.29.30036\'
> [2022-03-14T05:01:02.160Z] VCTOOLSVERSION = '14.29.30037'
> [2022-03-14T05:01:02.160Z] VISUALSTUDIOVERSION = '16.0'
> [2022-03-14T05:01:02.160Z] VS160COMNTOOLS = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\Common7\Tools\'
> [2022-03-14T05:01:02.160Z] VSCMD_ARG_APP_PLAT = 'Desktop'
> [2022-03-14T05:01:02.160Z] VSCMD_ARG_HOST_ARCH = 'x64'
> [2022-03-14T05:01:02.160Z] VSCMD_ARG_TGT_ARCH = 'x64'
> [2022-03-14T05:01:02.160Z] VSCMD_VER = '16.10.2'
> [2022-03-14T05:01:02.160Z] VSINSTALLDIR = 'C:\Program Files
> (x86)\Microsoft Visual Studio\2019\Professional\'
> [2022-03-14T05:01:02.160Z] WINDIR = 'C:\WINDOWS'
> [2022-03-14T05:01:02.160Z] WINDOWSLIBPATH = 'C:\Program Files
> (x86)\Windows Kits\10\UnionMetadata\10.0.19041.0;C:\Program Files
> (x86)\Windows Kits\10\References\10.0.19041.0'
> [2022-03-14T05:01:02.160Z] WINDOWSSDKBINPATH = 'C:\Program Files
> (x86)\Windows Kits\10\bin\'
> [2022-03-14T05:01:02.160Z] WINDOWSSDKDIR = 'C:\Program Files (x86)\Windows
> Kits\10\'
> [2022-03-14T05:01:02.160Z] WINDOWSSDKLIBVERSION = '10.0.19041.0\'
> [2022-03-14T05:01:02.161Z] WINDOWSSDKVERBINPATH = 'C:\Program Files
> (x86)\Windows Kits\10\bin\10.0.19041.0\'
> [2022-03-14T05:01:02.161Z] WINDOWSSDKVERSION = '10.0.19041.0\'
> [2022-03-14T05:01:02.161Z] WORKSPACE = 'C:/CI/Job Build'
> [2022-03-14T05:01:02.161Z] WORKSPACE_TMP = 'C:/CI/Job Build@tmp'
> [2022-03-14T05:01:02.161Z] __DEVINIT_PATH = 'C:\Program Files
> (x86)\Microsoft Visual
> Studio\2019\Professional\Common7\Tools\devinit\devinit.exe'
> [2022-03-14T05:01:02.161Z] __DOTNET_ADD_64BIT = '1'
> [2022-03-14T05:01:02.161Z] __DOTNET_PREFERRED_BITNESS = '64'
> [2022-03-14T05:01:02.161Z] __VSCMD_PREINIT_PATH = 'C:\Program Files
> (x86)\Common Files\Oracle\Java\javapath;C:\Program
> Files\Python38\Scripts\;C:\Program
> Files\Python38\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program
> Files\Git\cmd;C:\Users\Jenkins\AppData\Local\M

Re: Unit tests all pass in Jenkins on Linux

2022-03-13 Thread Ben Cooksley
On Mon, Mar 14, 2022 at 4:40 AM David Faure  wrote:

> After the recent discussions on state of CI, I fixed the last unittest
> failures (kio, purpose... + apol fixed ECM) so that
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/
> is all green^H^Hblue again.
> Please keep it that way!
>

Thanks for looking into and fixing all of these David.


>
> Note however that
>
> * kwayland has a flaky test:
>
>
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/kwayland/job/kf5-qt5%20SUSEQt5.15/171/testReport/junit/projectroot.autotests/client/kwayland_testDataDevice/
>
> FAIL!  : TestDataDevice::testReplaceSource() Compared values are not the
> same
>Actual   (selectionOfferedSpy.count()): 1
>Expected (2)  : 2
>Loc: [autotests/client/test_datadevice.cpp(557)]
>
> Who can look at this one? git log mostly shows Martin Flöser <
> mgraess...@kde.org>
> who I think isn't active anymore?
>

Not sure if it applies to KWayland as well, but I know that KWin has
load-sensitive tests (which is why the Gitlab .kde-ci.yml files support the
flag tests-load-sensitive).
If this test appears to be flaky, then it is quite possible that it is
load-sensitive as well.


>
> * krunner has a flaky test [2] because it measures time spent and expects
> small values like 65ms
> (I changed that one to 100ms), 250ms, 300ms. With only 10% safety margins.
> On a busy CI system,
> this is bound to fail regularly, even with bigger safety margins. In my
> experience this kind of test
> is just not possible (we're not running on a real time OS), I vote for
> removing the test.
> CC'ing Eduardo.
>
>
> https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/krunner/job/kf5-qt5%20SUSEQt5.15/325/testReport/junit/projectroot/autotests/runnermanagertest/


Yes, that will definitely fail more often than not - the only way to make
sure tests like this pass on our CI system is to
set tests-load-sensitive=True (in Gitlab CI).
Note however that this option should be avoided where possible, as it means your
build will stop and wait for load to fall to low levels before proceeding
with running tests - which blocks a CI worker slot from being used by
another project.

I'd also be in favour of removing this test.
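
For reference, enabling the flag in a project's .kde-ci.yml looks roughly
like this (a sketch, following the Options syntax these files already use):

  Options:
    tests-load-sensitive: True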


>
> --
> David Faure, fa...@kde.org, http://www.davidfaure.fr
> Working on KDE Frameworks 5
>
>
>
>
Cheers,
Ben


Re: Hard dependency on plasma-framework in Alkimia

2022-03-10 Thread Ben Cooksley
On Thu, Mar 10, 2022 at 9:12 AM Thomas Baumgart  wrote:

> Hi Ben,
>

Hi Thomas,


>
> On Wednesday, 9 March 2022 11:09:53 CET Ben Cooksley wrote:
>
> > Hi Thomas,
> >
> > Recently some changes were introduced to Frameworks which means that they
> > now enforce more rigorously the platforms on which they build.
> >
> > This means that Plasma Framework is no longer available on Windows -
> > unfortunately though it looks like Alkimia has a mandatory dependency on
> > Plasma Framework.
> >
> > Are you able to make this optional or should we disable Windows CI builds
> > for Alkimia (and the projects that depend on it)?
>
> I just accepted https://invent.kde.org/office/alkimia/-/merge_requests/17
> so you
> can re-enable Alkimia builds once this is merged.
>

Thanks for getting that merged in.


>
> p.s. quite a short time frame from notice until cut-off for my taste.
>

That was a temporary measure only as I wanted to make sure there weren't
any other issues in other projects that also needed addressing.
I wanted to get this sorted as soon as reasonably possible to allow for
progress to be made on Windows CI on Gitlab.


>
> --
>
> Regards
>
> Thomas Baumgart
>
> https://www.signal.org/   Signal, the better WhatsApp
> -
> Linux, because rebooting is for adding new hardware ...
> -
>

Cheers,
Ben


[sysadmin/ci-tooling] local-metadata: Re-enable Alkimia CI on Windows now that the hard dependency issues have been resolved.

2022-03-10 Thread Ben Cooksley
Git commit 9ef1b7c455e1eeba332593eb8cd3e722bca1d33a by Ben Cooksley.
Committed on 10/03/2022 at 08:42.
Pushed by bcooksley into branch 'master'.

Re-enable Alkimia CI on Windows now that the hard dependency issues have been 
resolved.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: kmymoney-de...@kde.org

M  +0    -1    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/9ef1b7c455e1eeba332593eb8cd3e722bca1d33a

diff --git a/local-metadata/project-ignore-rules.yaml 
b/local-metadata/project-ignore-rules.yaml
index e42d203..5d7ecfd 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -22,7 +22,6 @@
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'
 - 'extragear/libs/pulseaudio-qt'
-- 'extragear/office/alkimia'
 - "kde/pim/libkleo"
 
 'FreeBSDQt5.15':


[sysadmin/ci-tooling] local-metadata: Alkimia has a hard dependency on Plasma Framework by default on Windows, and Plasma Framework is no longer available on Windows.

2022-03-09 Thread Ben Cooksley
Git commit 9b120b06156140e0c2657de5f9306620812f7d40 by Ben Cooksley.
Committed on 09/03/2022 at 17:39.
Pushed by bcooksley into branch 'master'.

Alkimia has a hard dependency on Plasma Framework by default on Windows, and 
Plasma Framework is no longer available on Windows.
Therefore we have to disable Alkimia builds on Windows.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: kmymoney-de...@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/9b120b06156140e0c2657de5f9306620812f7d40

diff --git a/local-metadata/project-ignore-rules.yaml 
b/local-metadata/project-ignore-rules.yaml
index 5d7ecfd..e42d203 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -22,6 +22,7 @@
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'
 - 'extragear/libs/pulseaudio-qt'
+- 'extragear/office/alkimia'
 - "kde/pim/libkleo"
 
 'FreeBSDQt5.15':


[sysadmin/ci-tooling] local-metadata: KTextEditor has severed its dependency on KAuth, so we can restore its Windows builds.

2022-03-09 Thread Ben Cooksley
Git commit 38521ec9142faa672be942ffe9f0419414d35588 by Ben Cooksley.
Committed on 09/03/2022 at 17:37.
Pushed by bcooksley into branch 'master'.

KTextEditor has severed its dependency on KAuth, so we can restore its
Windows builds.

CCMAIL: kde-frameworks-devel@kde.org

M  +0    -1    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/38521ec9142faa672be942ffe9f0419414d35588

diff --git a/local-metadata/project-ignore-rules.yaml 
b/local-metadata/project-ignore-rules.yaml
index e8eab67..5d7ecfd 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -16,7 +16,6 @@
 - 'frameworks/kglobalaccel'
 - 'frameworks/kded'
 - 'frameworks/kdelibs4support'
-- 'frameworks/ktexteditor'
 - 'frameworks/plasma-framework'
 - 'frameworks/krunner'
 - 'kde/applications/baloo-widgets'



Hard dependency on plasma-framework in Alkimia

2022-03-09 Thread Ben Cooksley
Hi Thomas,

Recently some changes were introduced to Frameworks which means that they
now enforce more rigorously the platforms on which they build.

This means that Plasma Framework is no longer available on Windows -
unfortunately though it looks like Alkimia has a mandatory dependency on
Plasma Framework.

Are you able to make this optional or should we disable Windows CI builds
for Alkimia (and the projects that depend on it)?
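
(For anyone facing the same question elsewhere: making such a dependency
optional in CMake generally follows the pattern below - a sketch only, not
taken from Alkimia's actual build system.)

  # Sketch: look for Plasma Framework, but do not require it
  find_package(KF5Plasma QUIET)
  if(KF5Plasma_FOUND)
      # Build the Plasma-dependent pieces only when the dependency is found
      add_subdirectory(plasma)
  endif()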

Cheers,
Ben


[sysadmin/ci-tooling] local-metadata: KRunner requires Plasma Framework, so it effectively has a hard dependency on KGlobalAccel too.

2022-03-09 Thread Ben Cooksley
Git commit 6fd7e3b8adb9789c842fdb4f3361b95db53949b3 by Ben Cooksley.
Committed on 09/03/2022 at 09:31.
Pushed by bcooksley into branch 'master'.

KRunner requires Plasma Framework, so it effectively has a hard dependency on 
KGlobalAccel too.
Disable it as well on Windows.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/6fd7e3b8adb9789c842fdb4f3361b95db53949b3

diff --git a/local-metadata/project-ignore-rules.yaml 
b/local-metadata/project-ignore-rules.yaml
index e3c22b2..e8eab67 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -18,6 +18,7 @@
 - 'frameworks/kdelibs4support'
 - 'frameworks/ktexteditor'
 - 'frameworks/plasma-framework'
+- 'frameworks/krunner'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


[sysadmin/ci-tooling] local-metadata: Seems that lots of things require KGlobalAccel - also disable the build of Plasma Framework on Windows.

2022-03-09 Thread Ben Cooksley
Git commit 7671c8eb1f7bb2dcab74048adf03aceeafab336d by Ben Cooksley.
Committed on 09/03/2022 at 08:45.
Pushed by bcooksley into branch 'master'.

Seems that lots of things require KGlobalAccel - also disable the build of 
Plasma Framework on Windows.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/7671c8eb1f7bb2dcab74048adf03aceeafab336d

diff --git a/local-metadata/project-ignore-rules.yaml 
b/local-metadata/project-ignore-rules.yaml
index dc0496c..e3c22b2 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -17,6 +17,7 @@
 - 'frameworks/kded'
 - 'frameworks/kdelibs4support'
 - 'frameworks/ktexteditor'
+- 'frameworks/plasma-framework'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


[sysadmin/ci-tooling] local-metadata: KTextEditor has a hard dependency on KAuth which is no longer available on Windows.

2022-03-09 Thread Ben Cooksley
Git commit be4b7627c94e4c51e43e4adf28909bdfbd9cbecc by Ben Cooksley.
Committed on 09/03/2022 at 08:02.
Pushed by bcooksley into branch 'master'.

KTextEditor has a hard dependency on KAuth, which is no longer available on 
Windows.
Disable it on Windows as well.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: kwrite-de...@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/be4b7627c94e4c51e43e4adf28909bdfbd9cbecc

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index 33dd1e3..dc0496c 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -16,6 +16,7 @@
 - 'frameworks/kglobalaccel'
 - 'frameworks/kded'
 - 'frameworks/kdelibs4support'
+- 'frameworks/ktexteditor'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


KStars on Windows

2022-03-08 Thread Ben Cooksley
Hi Jasem,

Recently some changes were introduced to Frameworks which mean that they
now more rigorously enforce the platforms on which they build.

This means that KAuth is no longer available on Windows - unfortunately
though it looks like KStars has a mandatory dependency on KAuth.

Are you able to make this optional or should we disable Windows CI builds
for KStars?

Cheers,
Ben


[sysadmin/ci-tooling] local-metadata: kdelibs4support has a hard dependency on kglobalaccel (not sure why) which is no longer available on Windows.

2022-03-08 Thread Ben Cooksley
Git commit 8e43c6c798e16e45e5cc0ad8f148c0c8df6d5fd9 by Ben Cooksley.
Committed on 09/03/2022 at 07:27.
Pushed by bcooksley into branch 'master'.

kdelibs4support has a hard dependency on kglobalaccel (not sure why), which is 
no longer available on Windows.
Therefore blacklist it on Windows.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/8e43c6c798e16e45e5cc0ad8f148c0c8df6d5fd9

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index 8f7bf1b..33dd1e3 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -15,6 +15,7 @@
 - 'frameworks/baloo'
 - 'frameworks/kglobalaccel'
 - 'frameworks/kded'
+- 'frameworks/kdelibs4support'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


[sysadmin/ci-tooling] local-metadata: Block frameworks/kded on Windows too.

2022-03-08 Thread Ben Cooksley
Git commit 712ed90054285fa02a67c1a6fb92b66fc440146f by Ben Cooksley.
Committed on 08/03/2022 at 17:48.
Pushed by bcooksley into branch 'master'.

Block frameworks/kded on Windows too.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/712ed90054285fa02a67c1a6fb92b66fc440146f

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index 2f93f05..1430b83 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -14,6 +14,7 @@
 - 'frameworks/kactivities-stats'
 - 'frameworks/baloo'
 - 'frameworks/kglobalaccel'
+- 'frameworks/kded'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


Re: CI Repairs

2022-03-08 Thread Ben Cooksley
On Tue, Mar 8, 2022 at 11:20 PM Volker Krause  wrote:

> On Dienstag, 8. März 2022 08:54:38 CET Ben Cooksley wrote:
> > This evening i've repaired several issues that were causing builds to
> fail
> > on the main Jenkins CI system. This includes a broken Windows builder
> > (causing Windows builds to periodically fail) and a hung FreeBSD builder
> > (which was consuming half a CPU and preventing KWin CI jobs from
> completing)
>
> Thank you! Would that also explain the problems we are seeing with the
> FreeBSD
> seed job?
>

Unfortunately no.

Most of the issues I've seen with the seed jobs on FreeBSD/Windows have
been due to CMake erroring out as a consequence of the platform not being
supported.
I've been fixing those as we hit them (by disabling the build of that
project on that platform).

Looks like FreeBSD passes now.


>
> > Replacement runs have been initiated for all projects.
> >
> > So far all appears well, however a number of projects appear to have CI
> > regressions on one or more platforms due to:
> > - Use of exceptions (KMail)
>
> For this I'm not finding an explanation, it started after a completely
> unrelated merge commit and the exception using code is in an Akonadi
> header
> that is used all over the place.
>
> > - Use of an ECM version that does not exist (print-manager)
>
> Fixed.
>
> > - Use of C++ functionality that is not enabled (okular on Windows)
>
> https://invent.kde.org/graphics/okular/-/merge_requests/582
>
> > - Something to do with qobject_cast (akonadiconsole)
>
> We had similar issues in other modules over the past two weeks or so due
> to
> the include install layout changes not being propagated fully yet. That's
> what
> made me initially look at the FreeBSD seed job.
>
> Regards,
> Volker


Thanks,
Ben


[sysadmin/ci-tooling] local-metadata: Block KGlobalAccel on Windows too.

2022-03-08 Thread Ben Cooksley
Git commit 8d376b50b67f572dab60519a9ce7b3ba3a9f744c by Ben Cooksley.
Committed on 08/03/2022 at 17:13.
Pushed by bcooksley into branch 'master'.

Block KGlobalAccel on Windows too.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/8d376b50b67f572dab60519a9ce7b3ba3a9f744c

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index f970914..2f93f05 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -13,6 +13,7 @@
 - 'frameworks/kwayland'
 - 'frameworks/kactivities-stats'
 - 'frameworks/baloo'
+- 'frameworks/kglobalaccel'
 - 'kde/applications/baloo-widgets'
 - 'kde/workspace/libksysguard'
 - 'kde/kdenetwork/kaccounts-integration'


[sysadmin/ci-tooling] local-metadata: Block KAuth on Windows.

2022-03-08 Thread Ben Cooksley
Git commit 1322a5f4ae7335bf31a288189a455dff4c34c83c by Ben Cooksley.
Committed on 08/03/2022 at 09:36.
Pushed by bcooksley into branch 'master'.

Block KAuth on Windows.

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/1322a5f4ae7335bf31a288189a455dff4c34c83c

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index 36b34c4..f970914 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -7,6 +7,7 @@
 - 'frameworks/networkmanager-qt'
 - 'frameworks/modemmanager-qt'
 - 'frameworks/bluez-qt'
+- 'frameworks/kauth'
 - 'frameworks/kdesu'
 - 'frameworks/kpty'
 - 'frameworks/kwayland'


[sysadmin/ci-tooling] local-metadata: Ensure we do not use frameworks/bluez-qt on FreeBSD

2022-03-08 Thread Ben Cooksley
Git commit fc4c56fed4466c1adf26b570b000edb1791e5e43 by Ben Cooksley.
Committed on 08/03/2022 at 08:59.
Pushed by bcooksley into branch 'master'.

Ensure we do not use frameworks/bluez-qt on FreeBSD

CCMAIL: kde-frameworks-devel@kde.org

M  +1    -0    local-metadata/project-ignore-rules.yaml

https://invent.kde.org/sysadmin/ci-tooling/commit/fc4c56fed4466c1adf26b570b000edb1791e5e43

diff --git a/local-metadata/project-ignore-rules.yaml b/local-metadata/project-ignore-rules.yaml
index aa3ac42..36b34c4 100644
--- a/local-metadata/project-ignore-rules.yaml
+++ b/local-metadata/project-ignore-rules.yaml
@@ -22,6 +22,7 @@
 - 'kdesupport/polkit-qt-1'
 - 'frameworks/networkmanager-qt'
 - 'frameworks/modemmanager-qt'
+- 'frameworks/bluez-qt'
 - 'kde/workspace/plymouth-kcm'
 - 'kde/workspace/plasma-nm'
 - 'kde/workspace/plasma-vault'


Re: Critical Denial of Service bugs in Discover

2022-03-08 Thread Ben Cooksley
On Mon, Mar 7, 2022 at 1:16 PM Aleix Pol  wrote:

>
> On Sat, Mar 5, 2022 at 8:36 AM Ben Cooksley  wrote:
>
>> On Fri, Mar 4, 2022 at 12:49 AM Aleix Pol  wrote:
>>
>>> I'd say wireshark is too low level for what the problem is here. We are
>>> talking about having too many HTTP requests for specific URLs.
>>>
>>
>> Correct, I guess the difference in our approaches comes from a "before
>> release" to a "monitor after release" angle to things.
>> I'd like to see increased scrutiny during the development process as well
>> to make sure that we release code that operates properly from Day 1.
>>
>
> A way to do this could be using commit hooks that do not allow to reach
> certain services. (which we discussed in private chat).
> We could also analyse at cmake time the knsrc files we install, but this
> has a very limited and specific scope.
>

I've now applied two checks as part of the hooks, which will hopefully catch
anything new being introduced.
We still need to ensure that anything pre-existing is sorted out, of course.


>
>
>> I can think two main measures:
>>> - Trigger an alarm (an e-mail notification?) if there's a specific
>>> UserAgent that has a specific portion of the queries we have in a specific
>>> day in the services we care about.
>>> - Offer plots to see how queries by UserAgent evolve over the last
>>> couple of months (or couple of years).
>>>
>>
>> At the moment our ability to analyse our logs is somewhat limited by our
>> Privacy Policy - https://kde.org/privacypolicy/
>> Currently we don't have any provision for long term storage of
>> this information even on an aggregated basis - so we would need to update
>> this first.
>>
>
> Hopefully the NDA should help here and it doesn't seem all that far away.
> I know Neofytos and Ade have been working on it lately.
>

The privacy policy will still need to be updated, but that can form part of
the puzzle, yes.


>
> The second issue there is that we are transitioning users to contact a CDN
>> based endpoint (which is substantially more scalable).
>> This does mean we lose visibility on data such as User Agents and the
>> URLs being impacted though as we only get aggregated data unless we ask for
>> raw logs - which makes implementing something like what you've described
>> much harder.
>>
>
> That does seem like a stopper. Still, it seems like it's not that big of a
> problem when there is a CDN, so we better worry about the other cases.
>

We should still be reasonable to the CDN, of course, but it makes it much
more manageable, yes.


>
> Aleix
>

Cheers,
Ben


[sysadmin/repo-management] hooks: Implement two additional checks as part of our hooks:

2022-03-08 Thread Ben Cooksley
Git commit 919f7163102835d46c81593251fd0689fea71640 by Ben Cooksley.
Committed on 08/03/2022 at 08:13.
Pushed by bcooksley into branch 'master'.

Implement two additional checks as part of our hooks:

1) Require that all *.knsrc file changes be reviewed by a Sysadmin if landing 
in a non-work branch
2) Alert Sysadmin if anyone mentions download.kde.org or files.kde.org in the 
text of their code.

CCMAIL: kde-frameworks-devel@kde.org
CCMAIL: plasma-de...@kde.org

M  +14   -0    hooks/hooklib.py
M  +16   -2    hooks/invent.pre-receive

https://invent.kde.org/sysadmin/repo-management/commit/919f7163102835d46c81593251fd0689fea71640

diff --git a/hooks/hooklib.py b/hooks/hooklib.py
index 062b0e3..df04d96 100644
--- a/hooks/hooklib.py
+++ b/hooks/hooklib.py
@@ -706,6 +706,10 @@ class CommitEmailNotifier:
         if self.checker and (self.checker.license_problem or self.checker.commit_problem):
             cc_addresses.append( self.commit.committer_email )
 
+        # Add Sysadmin if infrastructure problems have been found
+        if self.checker and self.checker.infra_problem:
+            cc_addresses.append( 'sysad...@kde.org' )
+
         if self.keywords['email_gui']:
             cc_addresses.append( 'kde-doc-engl...@kde.org' )
 
@@ -1002,6 +1006,10 @@
     def commit_problem(self):
         return self._commit_problem
 
+    @property
+    def infra_problem(self):
+        return self._infra_problem
+
     @property
     def commit_notes(self):
         return self._commit_notes
@@ -1219,6 +1227,7 @@
 
         # Initialise
         self._license_problem = False
+        self._infra_problem = False
         self._commit_problem = False
         self._commit_notes = defaultdict(list)
 
@@ -1261,6 +1270,11 @@
                 self._commit_notes[filename].append(note)
                 self._commit_problem = True
 
+            # Check for references to KDE.org infrastructure which are being added without permission
+            if re.search(".*(download|files)\.kde\.org.*", line) and line.startswith("+"):
+                self._commit_notes[filename].append( "[INFRASTRUCTURE]" )
+                self._infra_problem = True
+
             # Store the diff
             filediff.append(line)
diff --git a/hooks/invent.pre-receive b/hooks/invent.pre-receive
index 75dda6a..537d104 100755
--- a/hooks/invent.pre-receive
+++ b/hooks/invent.pre-receive
@@ -99,6 +99,9 @@ translation_file_rules = [
     '^poqm/.*'
 ]
 
+# These users are authorised to review changes to *.knsrc files
+knsrc_reviewers = ['bcooksley', 'bshah', 'nalvarez']
+
 # For these users we always skip notifications
 notification_user_exceptions = ["scripty"]
 
@@ -355,8 +358,8 @@ for changeset in repository.changesets.values():
     if not os.path.exists(repository_config + "/skip-author-email-checks"):
         auditor.audit_emails_in_metadata( changeset, email_domains_blocked )
 
-   # Depending on who we are, we may also need to check to see whether we are changing translations that have been mirrored into the repository
-   # Only specific users are allowed to change these as they are maintained by scripty
+    # Depending on who we are, we may also need to check to see whether we are changing translations that have been mirrored into the repository
+    # Only specific users are allowed to change these as they are maintained by scripty
     if not os.path.exists(repository_config + "/skip-translation-protections") and push_user not in translation_mirror_maintainers:
         # Review each commit for changes to files...
         for commit in changeset.commits.values():
@@ -368,6 +371,17 @@ for changeset in repository.changesets.values():
                     if re.match(restriction, filename):
                         auditor.log_failure(commit.sha1, "Translations maintained separately: " + filename)
 
+    # Depending on who we are, we may also need to check to see whether we are impacting on a KNSRC file
+    # Only specific users are allowed to change these as they can have substantial infrastructure implications
+    if not os.path.exists(repository_config + "/skip-knsrc-protections") and push_user not in knsrc_reviewers and changeset.ref_type is not RefType.WorkBranch:
+        # Review each commit for changes to files...
+        for commit in changeset.commits.values():
+            # Now check each file that was changed in that commit...
+            for filename in commit.files_changed:
+                # Did we change a KNSRC file?
+                if re.match(".*knsrc.*", filename):
+                    auditor.log_failure(commit.sha1, "KNewStuff configuration must be Sysadmin reviewed: " + filename)
+
 # Did we have any commit audit failures?
 if auditor.audit_failed:
     print("Push declined - commits failed audit")


CI Repairs

2022-03-07 Thread Ben Cooksley
Hi all,

This evening I've repaired several issues that were causing builds to fail
on the main Jenkins CI system. This includes a broken Windows builder
(causing Windows builds to periodically fail) and a hung FreeBSD builder
(which was consuming half a CPU and preventing KWin CI jobs from completing)

Replacement runs have been initiated for all projects.

So far all appears well, however a number of projects appear to have CI
regressions on one or more platforms due to:
- Use of exceptions (KMail)
- Use of an ECM version that does not exist (print-manager)
- Use of C++ functionality that is not enabled (okular on Windows)
- Something to do with qobject_cast (akonadiconsole)

If developers could please fix their breakages that would be appreciated.

Thanks,
Ben


Re: Critical Denial of Service bugs in Discover

2022-03-04 Thread Ben Cooksley
On Fri, Mar 4, 2022 at 12:49 AM Aleix Pol  wrote:

> I'd say wireshark is too low level for what the problem is here. We are
> talking about having too many HTTP requests for specific URLs.
>

Correct, I guess the difference in our approaches comes from a "before
release" to a "monitor after release" angle to things.
I'd like to see increased scrutiny during the development process as well
to make sure that we release code that operates properly from Day 1.


>
> I can think two main measures:
> - Trigger an alarm (an e-mail notification?) if there's a specific
> UserAgent that has a specific portion of the queries we have in a specific
> day in the services we care about.
> - Offer plots to see how queries by UserAgent evolve over the last couple
> of months (or couple of years).
>

At the moment our ability to analyse our logs is somewhat limited by our
Privacy Policy - https://kde.org/privacypolicy/
Currently we don't have any provision for long term storage of
this information even on an aggregated basis - so we would need to update
this first.

The second issue there is that we are transitioning users to contact a CDN
based endpoint (which is substantially more scalable).
This does mean we lose visibility on data such as User Agents and the URLs
being impacted though as we only get aggregated data unless we ask for
raw logs - which makes implementing something like what you've described
much harder.


>
> Aleix
>

Cheers,
Ben


>
>
> On Thu, Mar 3, 2022 at 9:59 AM Ben Cooksley  wrote:
>
>> On Thu, Mar 3, 2022 at 8:41 AM Aleix Pol  wrote:
>>
>>> (dropping the distros list)
>>>
>>> @sysadmin have you been able to look into any tools we devs can have to
>>> make sure this situation doesn't repeat in the future?
>>>
>>
>> Hi Aleix,
>>
>> To be honest i've been struggling to think of ways that we could detect
>> this on the server side prior to it becoming a massive issue.
>> By the time an issue is evident server side it is usually much too late.
>>
>> The main tools i'd usually recommend would be the standard tools you
>> would use for monitoring the network activity of any application - such as
>> Wireshark.
>>
>> Is there something you were thinking of specifically in terms of us being
>> able to provide?
>>
>> Thanks,
>> Ben
>>
>>
>>>
>>> Aleix
>>>
>>> On Thu, Feb 10, 2022 at 1:10 PM Aleix Pol  wrote:
>>>
>>>> On Thu, Feb 10, 2022 at 11:05 AM Ben Cooksley 
>>>> wrote:
>>>> >
>>>> >
>>>> >
>>>> > On Thu, Feb 10, 2022 at 8:20 AM Aleix Pol  wrote:
>>>> >>
>>>> >> [Snip]
>>>> >>
>>>> >> We still haven't discussed here is how to prevent this problem from
>>>> >> happening again.
>>>> >>
>>>> >> If we don't have information about what is happening, we cannot fix
>>>> problems.
>>>> >
>>>> >
>>>> > Part of the issue here is that the problem only came to Sysadmin
>>>> attention very recently, when the system ran out of disk space as a result
>>>> of growing log files.
>>>> > It was at that point we realised we had a serious problem.
>>>> >
>>>> > Prior to that the system load hadn't climbed to dangerous levels (>
>>>> number of CPU cores) and Apache was keeping up with the traffic, so none of
>>>> our other monitoring was tripped.
>>>> >
>>>> > If you have any thoughts on what sort of information you are thinking
>>>> of that would be helpful.
>>>>
>>>> We could have plots of the amount of queries we get with a KNewStuff/*
>>>> user-agent over time and their distribution.
>>>>
>>>> > It would definitely be helpful though to know when new software is
>>>> going to be released that will be interacting with the servers as we will
>>>> then be able to monitor for abnormalities.
>>>>
>>>> We make big announcements of every Plasma release... (?)
>>>>
>>>> >> Is there anything that could be done in this front? The issue here
>>>> >> could have been addressed months ago, we just never knew it was
>>>> >> happening.
>>>> >
>>>> >
>>>> > One possibility that did occur to me today would be for us to
>>>> integrate some kind of killswitch that our applications would check on
>>>> first initialisation of functionality that talks to KDE.org servers.
>>>> > This would allow us to disable the functionality in question on user
>>>> systems.
>>>> >
>>>> > The check would only be done on first initialization to keep load
>>>> low, while still ensuring all users eventually are affected by the
>>>> killswitch (as they will eventually need to logout/reboot for some reason
>>>> or another).
>>>> >
>>>> > The killswitch would probably work best if it had some kind of
>>>> version check in it so we could specify which versions are disabled.
>>>> > That would allow for subsequent updates - once delivered by
>>>> distributions - to restore the functionality (while leaving it disabled for
>>>> those who haven't updated).
>>>>
>>>> The file we are serving here effectively is the kill switch to all of
>>>> KNewStuff.
>>>>
>>>> Aleix
>>>>
>>>


Re: Critical Denial of Service bugs in Discover

2022-03-03 Thread Ben Cooksley
On Thu, Mar 3, 2022 at 8:41 AM Aleix Pol  wrote:

> (dropping the distros list)
>
> @sysadmin have you been able to look into any tools we devs can have to
> make sure this situation doesn't repeat in the future?
>

Hi Aleix,

To be honest I've been struggling to think of ways that we could detect
this on the server side prior to it becoming a massive issue.
By the time an issue is evident server side it is usually much too late.

The main tools i'd usually recommend would be the standard tools you would
use for monitoring the network activity of any application - such as
Wireshark.

Is there something you were thinking of specifically in terms of us being
able to provide?

Thanks,
Ben


>
> Aleix
>
> On Thu, Feb 10, 2022 at 1:10 PM Aleix Pol  wrote:
>
>> On Thu, Feb 10, 2022 at 11:05 AM Ben Cooksley  wrote:
>> >
>> >
>> >
>> > On Thu, Feb 10, 2022 at 8:20 AM Aleix Pol  wrote:
>> >>
>> >> [Snip]
>> >>
>> >> We still haven't discussed here is how to prevent this problem from
>> >> happening again.
>> >>
>> >> If we don't have information about what is happening, we cannot fix
>> problems.
>> >
>> >
>> > Part of the issue here is that the problem only came to Sysadmin
>> attention very recently, when the system ran out of disk space as a result
>> of growing log files.
>> > It was at that point we realised we had a serious problem.
>> >
>> > Prior to that the system load hadn't climbed to dangerous levels (>
>> number of CPU cores) and Apache was keeping up with the traffic, so none of
>> our other monitoring was tripped.
>> >
>> > If you have any thoughts on what sort of information you are thinking
>> of that would be helpful.
>>
>> We could have plots of the amount of queries we get with a KNewStuff/*
>> user-agent over time and their distribution.
>>
>> > It would definitely be helpful though to know when new software is
>> going to be released that will be interacting with the servers as we will
>> then be able to monitor for abnormalities.
>>
>> We make big announcements of every Plasma release... (?)
>>
>> >> Is there anything that could be done in this front? The issue here
>> >> could have been addressed months ago, we just never knew it was
>> >> happening.
>> >
>> >
>> > One possibility that did occur to me today would be for us to integrate
>> some kind of killswitch that our applications would check on first
>> initialisation of functionality that talks to KDE.org servers.
>> > This would allow us to disable the functionality in question on user
>> systems.
>> >
>> > The check would only be done on first initialization to keep load low,
>> while still ensuring all users eventually are affected by the killswitch
>> (as they will eventually need to logout/reboot for some reason or another).
>> >
>> > The killswitch would probably work best if it had some kind of version
>> check in it so we could specify which versions are disabled.
>> > That would allow for subsequent updates - once delivered by
>> distributions - to restore the functionality (while leaving it disabled for
>> those who haven't updated).
>>
>> The file we are serving here effectively is the kill switch to all of
>> KNewStuff.
>>
>> Aleix
>>
>


Re: Critical Denial of Service bugs in Discover

2022-02-25 Thread Ben Cooksley
On Fri, Feb 25, 2022 at 10:09 PM Harald Sitter  wrote:

> On Mon, Feb 21, 2022 at 11:05 AM Ben Cooksley  wrote:
> >
> > On Mon, Feb 21, 2022 at 10:01 PM Harald Sitter  wrote:
> >>
> >> On Thu, Feb 10, 2022 at 1:11 PM Aleix Pol  wrote:
> >> >
> >> > On Thu, Feb 10, 2022 at 11:05 AM Ben Cooksley 
> wrote:
> >> > >
> >> > >
> >> > >
> >> > > On Thu, Feb 10, 2022 at 8:20 AM Aleix Pol  wrote:
> >> > >>
> >> > >> [Snip]
> >> > >>
> >> > >> We still haven't discussed here is how to prevent this problem from
> >> > >> happening again.
> >> > >>
> >> > >> If we don't have information about what is happening, we cannot
> fix problems.
> >> > >
> >> > >
> >> > > Part of the issue here is that the problem only came to Sysadmin
> attention very recently, when the system ran out of disk space as a result
> of growing log files.
> >> > > It was at that point we realised we had a serious problem.
> >> > >
> >> > > Prior to that the system load hadn't climbed to dangerous levels (>
> number of CPU cores) and Apache was keeping up with the traffic, so none of
> our other monitoring was tripped.
> >> > >
> >> > > If you have any thoughts on what sort of information you are
> thinking of that would be helpful.
> >> >
> >> > We could have plots of the amount of queries we get with a KNewStuff/*
> >> > user-agent over time and their distribution.
> >> >
> >> > > It would definitely be helpful though to know when new software is
> going to be released that will be interacting with the servers as we will
> then be able to monitor for abnormalities.
> >> >
> >> > We make big announcements of every Plasma release... (?)
> >> >
> >> > >> Is there anything that could be done in this front? The issue here
> >> > >> could have been addressed months ago, we just never knew it was
> >> > >> happening.
> >> > >
> >> > >
> >> > > One possibility that did occur to me today would be for us to
> integrate some kind of killswitch that our applications would check on
> first initialisation of functionality that talks to KDE.org servers.
> >> > > This would allow us to disable the functionality in question on
> user systems.
> >> > >
> >> > > The check would only be done on first initialization to keep load
> low, while still ensuring all users eventually are affected by the
> killswitch (as they will eventually need to logout/reboot for some reason
> or another).
> >> > >
> >> > > The killswitch would probably work best if it had some kind of
> version check in it so we could specify which versions are disabled.
> >> > > That would allow for subsequent updates - once delivered by
> distributions - to restore the functionality (while leaving it disabled for
> those who haven't updated).
> >> >
> >> > The file we are serving here effectively is the kill switch to all of
> KNewStuff.
> >>
> >> I'm a bit late to the party but for future reference I think this
> >> was/is an architectural scaling problem on the server side as much as
> >> a bug on the client. If just https load is the problem then the
> >> "hotfix" is to use a HTTP load balancer until fixes make it into the
> >> clients, killing the clients is like the last resort ever. I'm sure we
> >> have the money to afford a bunch of cloud nodes serving as selective
> >> proxy caches for a month to balance out the KNS load on the canonical
> >> server.
> >
> >
> > This was a multi-fold bug:
> >
> > 1) Sysadmin allowing a compatibility endpoint to remain alive for years
> after we told people to stop using it and to use the new one (which is on a
> CDN and which would have handled this whole issue much better)
> > 2) Developers writing code to talk to KDE.org infrastructure without
> consulting Sysadmin, especially where it deviated from previously
> established patterns.
> >
> > In terms of scalability I disagree - the system is not being used here
> in a manner for which it was not designed.
> >
> > This system is intended to serve downloads of KDE software and
> associated data files to distributors and end users. These are actions that
> are expected to:
> > a) Be 

Re: Critical Denial of Service bugs in Discover

2022-02-21 Thread Ben Cooksley
On Mon, Feb 21, 2022 at 10:01 PM Harald Sitter  wrote:

> On Thu, Feb 10, 2022 at 1:11 PM Aleix Pol  wrote:
> >
> > On Thu, Feb 10, 2022 at 11:05 AM Ben Cooksley  wrote:
> > >
> > >
> > >
> > > On Thu, Feb 10, 2022 at 8:20 AM Aleix Pol  wrote:
> > >>
> > >> [Snip]
> > >>
> > >> We still haven't discussed here is how to prevent this problem from
> > >> happening again.
> > >>
> > >> If we don't have information about what is happening, we cannot fix
> problems.
> > >
> > >
> > > Part of the issue here is that the problem only came to Sysadmin
> attention very recently, when the system ran out of disk space as a result
> of growing log files.
> > > It was at that point we realised we had a serious problem.
> > >
> > > Prior to that the system load hadn't climbed to dangerous levels (>
> number of CPU cores) and Apache was keeping up with the traffic, so none of
> our other monitoring was tripped.
> > >
> > > If you have any thoughts on what sort of information you are thinking
> of that would be helpful.
> >
> > We could have plots of the amount of queries we get with a KNewStuff/*
> > user-agent over time and their distribution.
> >
> > > It would definitely be helpful though to know when new software is
> going to be released that will be interacting with the servers as we will
> then be able to monitor for abnormalities.
> >
> > We make big announcements of every Plasma release... (?)
> >
> > >> Is there anything that could be done in this front? The issue here
> > >> could have been addressed months ago, we just never knew it was
> > >> happening.
> > >
> > >
> > > One possibility that did occur to me today would be for us to
> integrate some kind of killswitch that our applications would check on
> first initialisation of functionality that talks to KDE.org servers.
> > > This would allow us to disable the functionality in question on user
> systems.
> > >
> > > The check would only be done on first initialization to keep load low,
> while still ensuring all users eventually are affected by the killswitch
> (as they will eventually need to logout/reboot for some reason or another).
> > >
> > > The killswitch would probably work best if it had some kind of version
> check in it so we could specify which versions are disabled.
> > > That would allow for subsequent updates - once delivered by
> distributions - to restore the functionality (while leaving it disabled for
> those who haven't updated).
> >
> > The file we are serving here effectively is the kill switch to all of
> KNewStuff.
>
> I'm a bit late to the party but for future reference I think this
> was/is an architectural scaling problem on the server side as much as
> a bug on the client. If just https load is the problem then the
> "hotfix" is to use a HTTP load balancer until fixes make it into the
> clients, killing the clients is like the last resort ever. I'm sure we
> have the money to afford a bunch of cloud nodes serving as selective
> proxy caches for a month to balance out the KNS load on the canonical
> server.
>

This was a multi-fold bug:

1) Sysadmin allowing a compatibility endpoint to remain alive for years
after we told people to stop using it and to use the new one (which is on a
CDN and which would have handled this whole issue much better)
2) Developers writing code to talk to KDE.org infrastructure without
consulting Sysadmin, especially where it deviated from previously
established patterns.

In terms of scalability I disagree - the system is being used here in a
manner for which it was not designed.

This system is intended to serve downloads of KDE software and associated
data files to distributors and end users. These are actions that are
expected to:
a) Be undertaken on an infrequent basis; and
b) Be undertaken as a result of user initiated action (such as clicking a
download link)

It was never intended to be used to serve configuration data files to end
user systems. We have autoconfig.kde.org for that.

The system in question is handling the load extremely well and far beyond
my expectations - it is fairly unfathomable that download.kde.org and
files.kde.org would receive traffic on the order of 500-600 requests per
second.
During this time the highest load I have seen has been around 8 - and
despite this being uncomfortably busy it has not fallen over or dropped the
ball for both its BAU activity as well as the abuse it has taken.
(My extreme level of concern on this matter has been because I knew th

Re: Dropping dead(?) Python bindings generation code?

2022-02-12 Thread Ben Cooksley
On Sun, Feb 13, 2022 at 6:36 AM Friedrich W. H. Kossebau 
wrote:

> Hi,
>

Hi there,


>
> trying to ensure some changes do not break the Python binding generation,
> I
> actually tried to activate that, but found at least on current openSUSE TW
> there seem to be no longer any working dependencies. Also the openSUSE TW
> packages of the KF modules seem to also be build without bindings, for the
> samples I checked.
>

> Then I found that on both gitlab & jenkins CI the binding generation is
> also
> skipt (at least for KCoreAddons on all platforms, but seems also
> everywhere
> else).
> Some related commit removing the support talks about "deterministic"
> builds
> though:
> https://invent.kde.org/sysadmin/ci-tooling/-/commit/
> 6a92fdf747990d2e074e92b2bdc224efc9b08740
>

Not sure if SUSE hit the same issues we did, but the build of these on KDE
CI has been disabled for a long time because the Python bindings did not
build reliably.
From my understanding, this was caused by dependency sequencing issues
within CMake.

Consequently we would get builds falling over periodically for no reason
other than the timing within the build itself.
This usually made it pretty difficult for Dependency Builds to complete as
at least one Framework would invariably fall over.

Without that being fixed, we would continue to have the Python bindings
support disabled on the CI system regardless of anything else being fixed.


>
> Then on #kde-devel I was told that"pyqt5 5.15.6 + sip4" do no more go
> together, referencing
> https://www.riverbankcomputing.com/pipermail/pyqt/2021-November/044346.html
> :
> > It wasn't an intentional breakage but it's not something I'm going to
> rush
> to fix.
>
> Who feels in charge of the Python binding support? Is there a chance
> someone
> will work on this soonish?
>
> Or could we drop it now, and save everyone the cmake warning messages they
> cannot fix and also the bad feeling to change things that might break
> binding
> generation support even further?
>
> It was suggested that "the only reasonable way forward it to port to
> modern
> sip" "but that requires an almost full rewrite".
> Which sounds as if any future system will need a rework of ECM's
> PythonModuleGeneration as well, thus keeping the current CMake code in KF
> modules around in chance they might get used again as they are in the
> future
> would not make sense.
>
> Reference removal of the Python binding generation support up as
> https://invent.kde.org/frameworks/kcoreaddons/-/merge_requests/198
> to serve as example for the discussion.
>
> Cheers
> Friedrich
>

Cheers,
Ben


Re: Critical Denial of Service bugs in Discover

2022-02-12 Thread Ben Cooksley
On Fri, Feb 11, 2022 at 10:22 AM Fabian Vogt  wrote:

> Moin,
>
> Am Sonntag, 6. Februar 2022, 21:54:13 CET schrieb Fabian Vogt:
> > Am Sonntag, 6. Februar 2022, 19:27:11 CET schrieb Ben Cooksley:
> > > On Sun, Feb 6, 2022 at 1:07 PM Fabian Vogt 
> wrote:
> > > > The first URL is used by kfontinst.knsrc from plasma-workspace:
> > > > ProvidersUrl=
> https://distribute.kde.org/khotnewstuff/fonts-providers.xml
> > > >
> > > > The second URL is used by multiple knsrc files in my VM:
> > > > aurorae.knsrc:ProvidersUrl=
> https://download.kde.org/ocs/providers.xml
> > > > comic.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> > > > kwineffect.knsrc:ProvidersUrl=
> https://download.kde.org/ocs/providers.xml
> > > > kwinscripts.knsrc:ProvidersUrl=
> https://download.kde.org/ocs/providers.xml
> > > > kwinswitcher.knsrc:ProvidersUrl=
> https://download.kde.org/ocs/providers.xml
> > > > wallpaperplugin.knsrc:ProvidersUrl=
> > > > https://download.kde.org/ocs/providers.xml
> > >
> > > This makes me incredibly sad. We had a push to eliminate all usage of
> the
> > > legacy download.kde.org endpoint many years ago...
> > > I have now resolved the majority of these - if distributions could
> please
> > > pick up those patches that would be appreciated.
> > >
> > > Please note that I have now terminated the support on the server that
> was
> > > making these legacy endpoints work, so those patches are necessary to
> > > restore functionality.
> > ...
> > It's also possible that the requests aren't actually caused by Discover
> at all,
> > but just something which imitates it in a DDoS attack. In that case we
> couldn't
> > do anything on the client-side anyway. I don't think this is very
> likely, but
> > until the issue was reproduced with disover it's a possibility.
>
> I think I have a plausible explanation for what could've caused this.
> While testing a MR for the notifier, I noticed odd behaviour: It always ran
> plasma-discover-update twice!
> https://invent.kde.org/plasma/discover/-/merge_requests/254#note_394584
>
> The reason for that is that after the update process finishes, the notifier
> realizes that it's idle again and if updates are available, it will
> immediately
> trigger another update after the 15min idle time. Now here's the catch: If
> the
> system has already been idle for >=15min (which is very likely at that
> point),
> the idle timeout will immediately fire! This process repeats unlimited and
> without delay, until the system is no longer idle or there aren't updates
> available anymore. Here I have plasma-discover-update running approx. every
> second, which amounts to ~4 req/s to download.kde.org.
>
> This is mostly mitigated by the introduction of the 3h delay between
> updates
> by d607e0c6f9, but not entirely. The check is only effective after the
> second
> iteration, which is what I observed in my testing. (One of the commits in
> my MR
> should address that as well.)
>
> One of the conditions for running into this bug is that after the automatic
> updater ran, there still have to be updates available to trigger the next
> run.
> Initially I thought that this can mostly happen if updates fail to
> download or
> install, this is unfortunately not true. The notifier by default counts all
> available updates, but the updater only installs offline updates. So if
> there
> is even a single non-offline update available, the loop continues.
>

Continues infinitely, I assume?


>
> So this probably affected a lot of users who enabled automatic
> installation of
> updates :-/
>

Do we know if any distributions flipped that switch?


>
> Cheers,
> Fabian
>
>
Regards,
Ben


Re: Critical Denial of Service bugs in Discover

2022-02-10 Thread Ben Cooksley
On Thu, Feb 10, 2022 at 8:20 AM Aleix Pol  wrote:

> [Snip]
>
> We still haven't discussed here is how to prevent this problem from
> happening again.
>
> If we don't have information about what is happening, we cannot fix
> problems.
>

Part of the issue here is that the problem only came to Sysadmin attention
very recently, when the system ran out of disk space as a result of growing
log files.
It was at that point we realised we had a serious problem.

Prior to that the system load hadn't climbed to dangerous levels (> number
of CPU cores) and Apache was keeping up with the traffic, so none of our
other monitoring was tripped.

If you have any thoughts on what sort of information you are thinking of
that would be helpful.

It would definitely be helpful though to know when new software is going to
be released that will be interacting with the servers as we will then be
able to monitor for abnormalities.
(This would have allowed us to advise on the User-Agent stuff prior to
September, as well as point out potential issues with caching.)


> Is there anything that could be done in this front? The issue here
> could have been addressed months ago, we just never knew it was
> happening.


One possibility that did occur to me today would be for us to integrate
some kind of killswitch that our applications would check on first
initialisation of functionality that talks to KDE.org servers.
This would allow us to disable the functionality in question on user
systems.

The check would only be done on first initialization to keep load low,
while still ensuring all users eventually are affected by the killswitch
(as they will eventually need to logout/reboot for some reason or another).

The killswitch would probably work best if it had some kind of version
check in it so we could specify which versions are disabled.
That would allow for subsequent updates - once delivered by distributions -
to restore the functionality (while leaving it disabled for those who
haven't updated).
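
To make the idea concrete, here is a rough sketch of what such a check
could look like - the endpoint name and payload format are purely
hypothetical, and the check deliberately fails open so that it can never
disable anything by accident:

import json
import urllib.request

# Hypothetical endpoint and payload - nothing like this exists today.
KILLSWITCH_URL = "https://autoconfig.kde.org/killswitch.json"

def feature_enabled(component, version):
    """Return False only if this component/version pair is explicitly disabled."""
    try:
        with urllib.request.urlopen(KILLSWITCH_URL, timeout=5) as response:
            disabled = json.load(response)  # e.g. {"knewstuff": ["5.86.0"]}
    except (OSError, ValueError):
        return True  # fail open: a broken check must never block users itself
    return version not in disabled.get(component, [])

# Checked once, on first initialisation of the networked functionality.
if feature_enabled("knewstuff", "5.90.0"):
    pass  # proceed to talk to KDE.org services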


>
> Aleix
>

Thanks,
Ben


Re: Critical Denial of Service bugs in Discover

2022-02-08 Thread Ben Cooksley
On Tue, Feb 8, 2022 at 4:24 AM Aleix Pol  wrote:

> On Sat, Feb 5, 2022 at 10:16 PM Ben Cooksley  wrote:
> >
> > Hi all,
> >
> > Over the past week or so Sysadmin has been dealing with an extremely
> high volume of traffic directed towards both download.kde.org and
> distribute.kde.org.
> >
> > This traffic volume is curious in so far that it is directed at two
> paths specifically:
> > - distribute.kde.org/khotnewstuff/fonts-providers.xml
> > - download.kde.org/ocs/providers.xml
> >
> > The first path is an "internal only" host which we were redirecting a
> legacy path to prior to the resource being relocated to cdn.kde.org. The
> second path has been legacy for numerous years now (more than 5) and is
> replaced by autoconfig.kde.org.
> > It is of extreme concern that these paths are still in use - especially
> the ocs/providers.xml one.
> >
> > The volume of traffic has reached an extent that to prevent the server
> disk filling up we have had to disable logging for those two sites. Whilst
> dependent on the time of day the server is currently dealing with the
> current volume of requests, which is far outside normal specifications:
> >
> > 555 requests/sec - 4.5 MB/second - 8.3 kB/request - .739199
> ms/request
> >
> > Analysis of a fragment of logs (comprising just a few minutes of
> traffic) reveals the following:
> >
> >  63 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.89.0-discoverupdate/5.23.5"
> >  64 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.89.0-discoverupdate/5.23.4"
> > 104 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.90.0-discoverupdate/5.23.90"
> > 105 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.88.0-discoverupdate/5.23.5"
> >1169 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.86.0-plasma-discover-update/"
> >1256 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
> "KNewStuff/5.90.0-discoverupdate/5.23.5"
> >2905 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-" "Mozilla/5.0"
> >
> >  86 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 200 6773 "-"
> "Mozilla/5.0"
> > 130 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.89.0-discoverupdate/5.23.5"
> > 136 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.89.0-discoverupdate/5.23.4"
> > 197 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.88.0-discoverupdate/5.23.5"
> > 199 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.90.0-discoverupdate/5.23.90"
> >2624 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.86.0-plasma-discover-update/"
> >2642 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "KNewStuff/5.90.0-discoverupdate/5.23.5"
> >6117 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
> "Mozilla/5.0"
> >
> > This indicates that the bug lies solely within Plasma's Discover
> component - more precisely it's updater.
> >
> > Examining the origin of these requests has indicated that some clients
> are making requests to these paths well in excess of several times a minute
> with a number of IP addresses appearing more 60 times in a 1 minute sized
> sample window.
> >
> > Given that Sysadmin has raised issues with this component and it's
> behaviour in the past, it appears that issues regarding the behaviour of
> the OCS componentry within Discover remain unresolved.
> >
> > Due to the level of distress this is causing our systems, I am therefore
> left with no other option other than to direct the Plasma Discover
> developers to create and release without delay patches for all versions in
> support, as well as for all those currently present in any actively
> maintained distributions, that disable all OCS functionality in the
> Discover updater. Distributions are requested to treat these patches as
> security patches and to distribute them to users without delay.
> >
> > In 24 hours time Sysadmin will be making a posting to kde-announce
> requesting that users immediately cease use of the Disco

Re: KF 5.91: 24 modules with failing unit tests (Re: Please fix failing unit tests with Windows platform)

2022-02-07 Thread Ben Cooksley
On Mon, Feb 7, 2022 at 10:56 PM Christoph Cullmann (cullmann.io) <
christ...@cullmann.io> wrote:

> On 2022-02-07 10:35, Ben Cooksley wrote:
> > On Sun, Feb 6, 2022 at 10:40 PM Friedrich W. H. Kossebau
> >  wrote:
> >
> >> Am Montag, 24. Januar 2022, 01:06:40 CET schrieb Friedrich W. H.
> >> Kossebau:
> >>> Hi,
> >>>
> >>> since a long time there are lots of failing unit tests across
> >> multiple
> >>> repositories. Could the Windows platform maintainers/stakeholders
> >> please
> >>> look soonish into either fixing those tests or properly marking
> >> them as
> >>> expected to fail, so the resources the KDE CI spends on running
> >> the tests
> >>> every hour, day and week make some sense again, as well as having
> >> something
> >>> usable to diff results again, to notice any new regressions?
> >>>
> >>> Please see
> >>>
> >>>
> >>
> >
> https://build.kde.org/job/Frameworks/view/Platform%20-%20WindowsMSVCQt5.15/
> >>> (best sort by "S" build status to get a list what need
> >>
> >> And those who believe in the broken windows theory also would claim
> >> this
> >> slacking now resulted in the regressions in the openSUSE builds,
> >> where 5
> >> modules now have failing unit tests at time of release tagging, when
> >> it once
> >> was 0 thanks to hard work of David F. and others. :(
> >>
> >> Is it time to remove
> >> https://community.kde.org/Frameworks/
> >> Policies#Frameworks_CI_failures_are_treated_as_stop_the_line_events
> >> because seemingly this is just old dead pixels on a web page and not
> >> the
> >> spirit these days?
> >
> > Not sure that is the ideal outcome here - preferrably our tests would
> > continue to all pass.
> >
> > I know some tests on certain platforms have been flaky and switch
> > between failing/passing - are we sure that isn't the driver of people
> > ignoring the results?
>
> One thing that sometimes lead me to ignore stuff with KTextEditor is
> that the UI
> tests are often very unstable.
>
> e.g. they just fail for me locally but then work perfectly in the CI or
> the other
> way around.
>
> Not sure how to improve that.
>

Interesting to note that it is GUI/UI tests that are causing issues. I
would have thought that the setup on the CI for those would be a carbon
copy almost every time, which makes these failures interesting.
Out of curiosity, what are the tests trying to accomplish and where is it
failing?


>
> For all non-UI tests naturally no such problems exist for KTextEditor
> and they are easy to keep
> working.
>
> KSyntaxHighlighting only has non-UI test and that is very easy to keep
> in a consistent shape.
>
> Greetings
> Christoph
>

Thanks,
Ben


>
> >
> > Cheers,
> > Ben
> >
> >> Friedrich
>
> --
> Ignorance is bliss...
> https://cullmann.io | https://kate-editor.org
>


Re: KF 5.91: 24 modules with failing unit tests (Re: Please fix failing unit tests with Windows platform)

2022-02-07 Thread Ben Cooksley
On Sun, Feb 6, 2022 at 10:40 PM Friedrich W. H. Kossebau 
wrote:

> Am Montag, 24. Januar 2022, 01:06:40 CET schrieb Friedrich W. H. Kossebau:
> > Hi,
> >
> > since a long time there are lots of failing unit tests across multiple
> > repositories. Could the Windows platform maintainers/stakeholders please
> > look soonish into either fixing those tests or properly marking them as
> > expected to fail, so the resources the KDE CI spends on running the tests
> > every hour, day and week make some sense again, as well as having
> something
> > usable to diff results again, to notice any new regressions?
> >
> > Please see
> >
> >
> https://build.kde.org/job/Frameworks/view/Platform%20-%20WindowsMSVCQt5.15/
> > (best sort by "S" build status to get a list what need
>
> And those who believe in the broken windows theory also would claim this
> slacking now resulted in the regressions in the openSUSE builds, where 5
> modules now have failing unit tests at time of release tagging, when it
> once
> was 0 thanks to hard work of David F. and others. :(
>
> Is it time to remove
> https://community.kde.org/Frameworks/
> Policies#Frameworks_CI_failures_are_treated_as_stop_the_line_events
> because seemingly this is just old dead pixels on a web page and not the
> spirit these days?
>

Not sure that is the ideal outcome here - preferably our tests would
continue to all pass.

I know some tests on certain platforms have been flaky and switch between
failing/passing - are we sure that isn't the driver of people ignoring the
results?

Cheers,
Ben


>
> Friedrich
>
>
>
>


Re: Critical Denial of Service bugs in Discover

2022-02-06 Thread Ben Cooksley
On Sun, Feb 6, 2022 at 1:07 PM Fabian Vogt  wrote:

> Hi,
>
> Am Samstag, 5. Februar 2022, 22:16:28 CET schrieb Ben Cooksley:
> > Hi all,
> >
> > Over the past week or so Sysadmin has been dealing with an extremely high
> > volume of traffic directed towards both download.kde.org and
> > distribute.kde.org.
> >
> > This traffic volume is curious in so far that it is directed at two paths
> > specifically:
> > - distribute.kde.org/khotnewstuff/fonts-providers.xml
> > - download.kde.org/ocs/providers.xml
> >
> > The first path is an "internal only" host which we were redirecting a
> > legacy path to prior to the resource being relocated to cdn.kde.org. The
> > second path has been legacy for numerous years now (more than 5) and is
> > replaced by autoconfig.kde.org.
> > It is of extreme concern that these paths are still in use - especially
> the
> > ocs/providers.xml one.
> >
> >...
> >
> > This indicates that the bug lies solely within Plasma's Discover
> component
> > - more precisely it's updater.
> >
> > Examining the origin of these requests has indicated that some clients
> are
> > making requests to these paths well in excess of several times a minute
> > with a number of IP addresses appearing more 60 times in a 1 minute sized
> > sample window.
>
> FWICT, this is caused by plasma-discover-update, which is triggered by the
> DiscoverNotifier service if automatic updates are enabled in kcm_updates,
> updates are available and the system idle for >=15min.
>
> // If the system is untouched for 1 hour, trigger the unattened update
> using namespace std::chrono_literals;
>
> KIdleTime::instance()->addIdleTimeout(int(std::chrono::milliseconds(15min).count()));
>
> (I wonder whether there's a bug about calling addIdleTimeout more than
> once.
> It will then invoke triggerUpdate multiple times after 15min of idle.)
>

That may explain why we are seeing so many requests from some IPs and very
few from others.


>
> The Discover KNS backend creates instances for all available knsrc files,
> which on construction call KNSReviews::setProviderUrl with the URL defined
> in
> those files, triggering the requests.
>

That does not sound scalable, and would certainly explain why not too long
ago we found that the traffic received by autoconfig.kde.org had grown to
such an extent we had to shift it to being handled by a CDN.
At the time I chalked the problem up to increasing popularity of our
software that included KNS functionality.


>
> The first URL is used by kfontinst.knsrc from plasma-workspace:
> ProvidersUrl=https://distribute.kde.org/khotnewstuff/fonts-providers.xml
>
> The second URL is used by multiple knsrc files in my VM:
> aurorae.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> comic.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> kwineffect.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> kwinscripts.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> kwinswitcher.knsrc:ProvidersUrl=https://download.kde.org/ocs/providers.xml
> wallpaperplugin.knsrc:ProvidersUrl=
> https://download.kde.org/ocs/providers.xml


This makes me incredibly sad. We had a push to eliminate all usage of the
legacy download.kde.org endpoint many years ago...
I have now resolved the majority of these - if distributions could please
pick up those patches that would be appreciated.

Please note that I have now terminated the support on the server that was
making these legacy endpoints work, so those patches are necessary to
restore functionality.


>
> > Given that Sysadmin has raised issues with this component and it's
> > behaviour in the past, it appears that issues regarding the behaviour of
> > the OCS componentry within Discover remain unresolved.
> >
> > Due to the level of distress this is causing our systems, I am therefore
> > left with no other option other than to direct the Plasma Discover
> > developers to create and release without delay patches for all versions
> in
> > support, as well as for all those currently present in any actively
> > maintained distributions, that disable all OCS functionality in the
> > Discover updater. Distributions are requested to treat these patches as
> > security patches and to distribute them to users without delay.
>
> Emergency workarounds for distributions might be to either not ship the KNS
> backend by not building kns-backend.so or deleting it afterwards, or
> disabling
> the discover notifier
> (/etc/xdg/autostart/org.kde.discover.notifier.desktop)
> completely.
>

I have now committed patches to Discover going back t

Critical Denial of Service bugs in Discover

2022-02-05 Thread Ben Cooksley
Hi all,

Over the past week or so Sysadmin has been dealing with an extremely high
volume of traffic directed towards both download.kde.org and
distribute.kde.org.

This traffic volume is curious insofar as it is directed at two paths
specifically:
- distribute.kde.org/khotnewstuff/fonts-providers.xml
- download.kde.org/ocs/providers.xml

The first path is served by an "internal only" host to which we were
redirecting a legacy path prior to the resource being relocated to
cdn.kde.org. The second path has been legacy for numerous years now (more
than 5) and is replaced by autoconfig.kde.org.
It is of extreme concern that these paths are still in use - especially the
ocs/providers.xml one.

The volume of traffic has reached an extent that, to prevent the server
disk filling up, we have had to disable logging for those two sites. Whilst
dependent on the time of day, the server is currently dealing with the
following volume of requests, which is far outside normal specifications:

555 requests/sec - 4.5 MB/second - 8.3 kB/request - .739199
ms/request

Analysis of a fragment of logs (comprising just a few minutes of traffic)
reveals the following:

 63 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.89.0-discoverupdate/5.23.5"
 64 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.89.0-discoverupdate/5.23.4"
104 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.90.0-discoverupdate/5.23.90"
105 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.88.0-discoverupdate/5.23.5"
   1169 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.86.0-plasma-discover-update/"
   1256 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-"
"KNewStuff/5.90.0-discoverupdate/5.23.5"
   2905 "GET /ocs/providers.xml HTTP/1.1" 301 6585 "-" "Mozilla/5.0"

 86 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 200 6773 "-"
"Mozilla/5.0"
130 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.89.0-discoverupdate/5.23.5"
136 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.89.0-discoverupdate/5.23.4"
197 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.88.0-discoverupdate/5.23.5"
199 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.90.0-discoverupdate/5.23.90"
   2624 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.86.0-plasma-discover-update/"
   2642 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"KNewStuff/5.90.0-discoverupdate/5.23.5"
   6117 "GET /khotnewstuff/fonts-providers.xml HTTP/1.1" 304 6132 "-"
"Mozilla/5.0"

This indicates that the bug lies solely within Plasma's Discover component
- more precisely, its updater.

Examining the origin of these requests has indicated that some clients are
making requests to these paths well in excess of several times a minute,
with a number of IP addresses appearing more than 60 times in a 1-minute
sample window.
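
For anyone wanting to run the same analysis themselves, counts like the above
are essentially "sort | uniq -c" output; a rough sketch, assuming an Apache
combined-format access log:

# Tally requests to the two legacy paths by request line and user agent.
# The log file name and combined format are assumptions - adjust as needed.
grep -E 'ocs/providers\.xml|khotnewstuff/fonts-providers\.xml' access.log |
  awk -F'"' '{ print $2, $6 }' | sort | uniq -c | sort -n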

Given that Sysadmin has raised issues with this component and its
behaviour in the past, it appears that issues regarding the behaviour of
the OCS componentry within Discover remain unresolved.

Due to the level of distress this is causing our systems, I am therefore
left with no option other than to direct the Plasma Discover
developers to create and release without delay patches for all versions in
support, as well as for all those currently present in any actively
maintained distributions, that disable all OCS functionality in the
Discover updater. Distributions are requested to treat these patches as
security patches and to distribute them to users without delay.

In 24 hours' time Sysadmin will be making a posting to kde-announce
requesting that users immediately cease use of the Discover update client
as it is creating a Denial of Service attack on our infrastructure.

Regards,
Ben Cooksley
KDE Sysadmin


Re: Maintainers of KDE Frameworks for the Windows platform?

2022-01-24 Thread Ben Cooksley
On Mon, Jan 24, 2022 at 10:48 PM Christoph Cullmann (cullmann.io) <
christ...@cullmann.io> wrote:

> On 2022-01-24 01:00, Friedrich W. H. Kossebau wrote:
> > Hi,
> >
> > in the past it was hard to find someone to fix things for KDE Frameworks
> > on Windows, and too often people not interested in Windows instead had to
> > spend their costly leisure time solving problems, e.g. by debugging via
> > CI runs.
> >
> > I do not think we can expect every contributor/patch author to be capable
> > of understanding and solving things on all platforms. For one, this does
> > not scale - and even more so when the platform is a proprietary one that
> > otherwise works against the mission of KDE and that people would rather
> > not have to know about.
> >
> > So we need dedicated maintainer teams for each platform, IMHO. And if
> > that team is empty, we have to drop official support for that platform,
> > instead of e.g. having it be a "broken windows theory" thing on CI (pun
> > intended).
> >
> > Given Linux (the default, all the usual suspect contributors), FreeBSD
> > (Tobias, Adriaan), and Android (some other usual suspect contributors)
> > are covered, reaction time is often same-day when help is needed with
> > those - unlike for Windows (and macOS, once it makes it to CI).
> >
> > Who would be available as a contact person for KF @ Windows, who could
> > be reliably called in to solve code issues appearing in new work, or
> > regressions caused by external influences? Either via a to-be-created
> > @teams tag or as highly available individuals?
> >
> > If we do not have enough people who can provide at least, say, weekly
> > work on the Windows platform, I would propose dropping official support,
> > as it is an annoying burden on those who have no stake in that platform.
> > It also harms the reputation of the KF product, because being badly
> > maintained and thus partially broken makes it into the developer/user
> > experience on those platforms, which is then (rightfully) mapped onto the
> > whole product, not just the support on that platform.
>
> I don't agree with that mindset.
>
> Naturally, as you point out in your other mail,
> the unit tests must be fixed.
>
> But besides that, I see Windows like any other platform:
> you need to ensure your changes don't kill it.
>
> It is not acceptable to commit stuff that breaks e.g. the FreeBSD
> CI; the same rule can apply to Windows, too.
>
> If you need help, you can ping people like me for Windows or we could
> create
> some @teams/windows or whatever.
>
> Besides that, I think in most cases our code is at a level that doesn't
> really have that many operating-system-specific parts.
>

I concur with Christoph's points here.


>
> There are special cases like baloo and co. - I would actually propose not
> supporting such stuff on Windows (or non-Linux) at all;
> not sure if it should be a Framework at all in that case.
>

A comprehensive list of what we currently support some form of CI for can
be found at
https://invent.kde.org/sysadmin/ci-management/-/blob/master/seeds/frameworks-latest.yml

Windows CI on Gitlab is not too far away - Frameworks is actually ready to
go, as it were; I just need a chance to try running the seed jobs.
Alas, other matters keep getting in the way (both within and outside of KDE).


>
> Greetings
> Christoph
>

Cheers,
Ben


>
> --
> Ignorance is bliss...
> https://cullmann.io | https://kate-editor.org
>


Re: Gitlab CI: failed unit tests vs. currently passing CI

2022-01-23 Thread Ben Cooksley
On Mon, Jan 24, 2022 at 12:56 AM Albert Astals Cid  wrote:

> On Sunday, 23 January 2022 at 1:59:01 (CET), Ben Cooksley wrote:
> > On Sun, Jan 23, 2022 at 12:29 PM Albert Astals Cid 
> wrote:
> >
> > > On Sunday, 23 January 2022 at 0:09:23 (CET), Ben Cooksley wrote:
> > > > On Sun, Jan 23, 2022 at 11:29 AM Albert Astals Cid 
> > > wrote:
> > > >
> > > > > On Saturday, 22 January 2022 at 6:11:29 (CET), Ben Cooksley wrote:
> > > > > > On Sat, Jan 22, 2022 at 1:31 PM
> > > Friedrich
> > > > > > W. H. Kossebau  wrote:
> > > > > >
> > > > > > > Hi,
> > > > > >
> > > > > >
> > > > > > > seems that Gitlab CI is currently configured to show the green
> > > > > "Success"
> > > > > > > checkmark for pipeline runs even if unit tests are failing.
> > > > > > >
> > > > > >
> > > > > > That is correct, only compilation or other internal failures
> cause
> > > the
> > > > > > build to show a failure result.
> > > > > >
> > > > > >
> > > > > > > Reason seems to be that Gitlab only knows Yay or Nay,
> > > without
> > > > > the
> > > > > > > warning state level known from Jenkins.
> > > > > > >
> > > > > >
> > > > > > Also correct.
> > > > > >
> > > > > >
> > > > > > > And given that quite a few projects (sadly) maintain some
> long-time
> > > > > > > failing
> > > > > > > unit tests, having the pipeline fail on unit tests seems to
> have
> > > been
> > > > > seen
> > > > > > > as
> > > > > > > too aggressive
> > > > > >
> > > > > >
> > > > > > Correct again.
> > > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > This of course harms the purpose of the unit tests, when their
> > > failures
> > > > > > > are
> > > > > > > only noticed weeks later, not e.g. at MR discussion time.
> > > > > > >
> > > > > >
> > > > > > Gitlab does note changes in the test suite as can currently be
> seen
> > > on
> > > > > > https://invent.kde.org/frameworks/kio/-/merge_requests/708
> > > > > > Quoting the page:  "Test summary contained 33 failed and 16 fixed
> > > test
> > > > > > results out of 205 total tests"
> > > > > >
> > > > > > It does the same thing for Code Quality - "Code quality scanning
> > > detected
> > > > > > 51 changes in merged results"
> > > > >
> > > > > Don't want to derail the conversation, but those results are
> terrible,
> > > > > they always say things changed in places not touched by the code of
> > > the MR,
> > > > > any idea why?
> > > > >
> > > >
> > > > Unfortunately not - my only guess would be that cppcheck reports
> results
> > > > slightly differently, which Gitlab has issues interpreting.
> > > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > Seeing how at least in KDE Frameworks first regressions
> sneaked in
> > > > > without
> > > > > > > someone noticing (nobody looks at logs when the surface shows a
> > > green
> > > > > > > checkmark, e.g. kcoreaddons, kwidgetsaddons, kio, purpose,
> krunner
> > > on
> > > > > > > openSUSE
> > > > > > > and possibly more have regressed in recent weeks, see
> > > build.kde.org)
> > > > > this
> > > > > > > should be something to deal with better, right?
> > > > > >
> > > > > >
> > > > > > > Bhushan gave two first ideas just now on #kde-sysadmin:
> > > > > > > > Well we can add a switch that repos can commit to saying test
> >

Re: Gitlab CI: failed unit tests vs. currently passing CI

2022-01-22 Thread Ben Cooksley
On Sun, Jan 23, 2022 at 12:38 PM Albert Astals Cid  wrote:

> On Sunday, 23 January 2022 at 0:09:23 (CET), Ben Cooksley wrote:
> > On Sun, Jan 23, 2022 at 11:29 AM Albert Astals Cid 
> wrote:
> >
> > > On Saturday, 22 January 2022 at 6:11:29 (CET), Ben Cooksley wrote:
> > > > On Sat, Jan 22, 2022 at 1:31 PM
> Friedrich
> > > > W. H. Kossebau  wrote:
> > > >
> > > > > Hi,
> > > >
> > > >
> > > > > seems that Gitlab CI is currently configured to show the green
> > > "Success"
> > > > > checkmark for pipeline runs even if unit tests are failing.
> > > > >
> > > >
> > > > That is correct, only compilation or other internal failures cause
> the
> > > > build to show a failure result.
> > > >
> > > >
> > > > > Reason seems to be that Gitlab only knows Yay or Nay,
> without
> > > the
> > > > > warning state level known from Jenkins.
> > > > >
> > > >
> > > > Also correct.
> > > >
> > > >
> > > > > And given that quite a few projects (sadly) maintain some long-time
> > > > > failing
> > > > > unit tests, having the pipeline fail on unit tests seems to have
> been
> > > seen
> > > > > as
> > > > > too aggressive
> > > >
> > > >
> > > > Correct again.
> > > >
> > > >
> > > > >
> > > > >
> > > > > This of course harms the purpose of the unit tests, when their
> failures
> > > > > are
> > > > > only noticed weeks later, not e.g. at MR discussion time.
> > > > >
> > > >
> > > > Gitlab does note changes in the test suite as can currently be seen
> on
> > > > https://invent.kde.org/frameworks/kio/-/merge_requests/708
> > > > Quoting the page:  "Test summary contained 33 failed and 16 fixed
> test
> > > > results out of 205 total tests"
> > > >
> > > > It does the same thing for Code Quality - "Code quality scanning
> detected
> > > > 51 changes in merged results"
> > >
> > > Don't want to derail the conversation, but those results are terrible,
> > > they always say things changed in places not touched by the code of
> the MR,
> > > any idea why?
> > >
> >
> > Unfortunately not - my only guess would be that cppcheck reports results
slightly differently, which Gitlab has issues interpreting.
>
> Can we just disable it?
>

Various things can be configured on a per-project basis. cppcheck is one of
them.
See
https://invent.kde.org/sysadmin/ci-utilities/-/blob/master/config-template.yml#L21
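
Purely as an illustration - the key name below is hypothetical, the real
option (and its default) lives in the template linked above - a project-local
override in .kde-ci.yml would look something like:

Options:
  # Hypothetical switch; consult config-template.yml for the actual
  # option governing the cppcheck/Code Quality job.
  run-cppcheck: False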


>
> Look at the results here
> https://invent.kde.org/graphics/okular/-/merge_requests/544
>
> Major - Either the condition 'printDialog' is redundant or there is
> possible null pointer dereference: printDialog. (CWE-476)
> in part/part.cpp:3341
>
> Fixed: Major - Either the condition 'printDialog' is redundant or there is
> possible null pointer dereference: printDialog. (CWE-476)
> in part/part.cpp:3340
>
> gitlab my friend, don't you think that maybe, just maybe this is the same
> code and you shouldn't complain to me about it since the only change to
> that file is 3000 lines away from it?
>

This is possibly cppcheck's fault, but yes, not terribly good work there on
fuzzy-matching line changes.


>
> I find it confusing; it always makes me sad and lowers my productivity.
>
> Cheers,
>   Albert
>
>
>
Cheers,
Ben


Re: Gitlab CI: failed unit tests vs. currently passing CI

2022-01-22 Thread Ben Cooksley
On Sun, Jan 23, 2022 at 12:29 PM Albert Astals Cid  wrote:

> On Sunday, 23 January 2022 at 0:09:23 (CET), Ben Cooksley wrote:
> > On Sun, Jan 23, 2022 at 11:29 AM Albert Astals Cid 
> wrote:
> >
> > > On Saturday, 22 January 2022 at 6:11:29 (CET), Ben Cooksley wrote:
> > > > On Sat, Jan 22, 2022 at 1:31 PM
> Friedrich
> > > > W. H. Kossebau  wrote:
> > > >
> > > > > Hi,
> > > >
> > > >
> > > > > seems that Gitlab CI is currently configured to show the green
> > > "Success"
> > > > > checkmark for pipeline runs even if unit tests are failing.
> > > > >
> > > >
> > > > That is correct, only compilation or other internal failures cause
> the
> > > > build to show a failure result.
> > > >
> > > >
> > > > > Reason seems to be that Gitlab only knows Yay or Nay,
> without
> > > the
> > > > > warning state level known from Jenkins.
> > > > >
> > > >
> > > > Also correct.
> > > >
> > > >
> > > > > And given that quite a few projects (sadly) maintain some long-time
> > > > > failing
> > > > > unit tests, having the pipeline fail on unit tests seems to have
> been
> > > seen
> > > > > as
> > > > > too aggressive
> > > >
> > > >
> > > > Correct again.
> > > >
> > > >
> > > > >
> > > > >
> > > > > This of course harms the purpose of the unit tests, when their
> failures
> > > > > are
> > > > > only noticed weeks later, not e.g. at MR discussion time.
> > > > >
> > > >
> > > > Gitlab does note changes in the test suite as can currently be seen
> on
> > > > https://invent.kde.org/frameworks/kio/-/merge_requests/708
> > > > Quoting the page:  "Test summary contained 33 failed and 16 fixed
> test
> > > > results out of 205 total tests"
> > > >
> > > > It does the same thing for Code Quality - "Code quality scanning
> detected
> > > > 51 changes in merged results"
> > >
> > > Don't want to derail the conversation, but those results are terrible,
> > > they always say things changed in places not touched by the code of
> the MR,
> > > any idea why?
> > >
> >
> > Unfortunately not - my only guess would be that cppcheck reports results
> > slightly differently, which Gitlab has issues interpreting.
> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > Seeing how at least in KDE Frameworks first regressions sneaked in
> > > without
> > > > > someone noticing (nobody looks at logs when the surface shows a
> green
> > > > > checkmark, e.g. kcoreaddons, kwidgetsaddons, kio, purpose, krunner
> on
> > > > > openSUSE
> > > > > and possibly more have regressed in recent weeks, see
> build.kde.org)
> > > this
> > > > > should be something to deal with better, right?
> > > >
> > > >
> > > > > Bhushan gave two first ideas just now on #kde-sysadmin:
> > > > > > Well we can add a switch that repos can commit to saying test
> > > failure is
> > > > > build failure
> > > > > > Another alternative is we use bot to write a comment on MR
> > > > >
> > > > > IMHO, to give unit tests the purpose they have, we should by
> default
> > > > > let
> > > > > test failures be build failures. And have projects opt out if they
> > > need to
> > > > > have some unit tests keep failing, instead of e.g. tagging them
> with
> > > > > expected
> > > > > failures or handling any special environment they run into on the
> CI.
> > > > >
> > > > > Your opinions?
> > > > >
> > > >
> > > > The switch will need to be the other way around, I'm afraid, as there
> are
> > > > simply too many projects with broken tests right now.
> > > > The best place for that switch will be in .kde-ci.yml.
> > > >
> > > > My only concern however would be abuse of this switch, much in the
> way
> > > that
> > 

Re: Gitlab CI: failed unit tests vs. currently passing CI

2022-01-22 Thread Ben Cooksley
On Sun, Jan 23, 2022 at 11:29 AM Albert Astals Cid  wrote:

> On Saturday, 22 January 2022 at 6:11:29 (CET), Ben Cooksley wrote:
> > On Sat, Jan 22, 2022 at 1:31 PM Friedrich
> > W. H. Kossebau  wrote:
> >
> > > Hi,
> >
> >
> > > seems that Gitlab CI is currently configured to show the green
> "Success"
> > > checkmark for pipeline runs even if unit tests are failing.
> > >
> >
> > That is correct, only compilation or other internal failures cause the
> > build to show a failure result.
> >
> >
> > > Reason seems to be that Gitlab only knows Yay or Nay, without
> the
> > > warning state level known from Jenkins.
> > >
> >
> > Also correct.
> >
> >
> > > And given that quite a few projects (sadly) maintain some long-time
> > > failing
> > > unit tests, having the pipeline fail on unit tests seems to have been
> seen
> > > as
> > > too aggressive
> >
> >
> > Correct again.
> >
> >
> > >
> > >
> > > This of course harms the purpose of the unit tests, when their failures
> > > are
> > > only noticed weeks later, not e.g. at MR discussion time.
> > >
> >
> > Gitlab does note changes in the test suite as can currently be seen on
> > https://invent.kde.org/frameworks/kio/-/merge_requests/708
> > Quoting the page:  "Test summary contained 33 failed and 16 fixed test
> > results out of 205 total tests"
> >
> > It does the same thing for Code Quality - "Code quality scanning detected
> > 51 changes in merged results"
>
> Don't want to derail the conversation, but those results are terrible,
> they always say things changed in places not touched by the code of the MR,
> any idea why?
>

Unfortunately not - my only guess would be that cppcheck reports results
slightly differently, which Gitlab has issues interpreting.


>
> >
> >
> > >
> > > Seeing how at least in KDE Frameworks first regressions sneaked in
> without
> > > someone noticing (nobody looks at logs when the surface shows a green
> > > checkmark, e.g. kcoreaddons, kwidgetsaddons, kio, purpose, krunner on
> > > openSUSE
> > > and possibly more have regressed in recent weeks, see build.kde.org)
> this
> > > should be something to deal with better, right?
> >
> >
> > > Bhushan gave two first ideas just now on #kde-sysadmin:
> > > > Well we can add a switch that repos can commit to saying test
> failure is
> > > build failure
> > > > Another alternative is we use bot to write a comment on MR
> > >
> > > IMHO, to give unit tests the purpose they have, we should by default
> > > let
> > > test failures be build failures. And have projects opt out if they
> need to
> > > have some unit tests keep failing, instead of e.g. tagging them with
> > > expected
> > > failures or handling any special environment they run into on the CI.
> > >
> > > Your opinions?
> > >
> >
> > The switch will need to be the other way around, I'm afraid, as there are
> > simply too many projects with broken tests right now.
> > The best place for that switch will be in .kde-ci.yml.
> >
> > My only concern however would be abuse of this switch, much in the way
> that
> > certain projects abuse EXCLUDE_DEPRECATED_BEFORE_AND_AT.
> > The last thing we would want would be for people to flip this switch and
> > then leave their CI builds in a failing state - meaning that actual
> > compilation failures would be missed (and then lead to CI maintenance
> > issues)
> >
> > Thoughts on that?
>
> Tests failing should mark the CI as failed; anything other than that
> doesn't make sense. The CI did fail; marking it as passed is lying to
> ourselves.


> We can *still* merge failed MR with failed CI, the Merge button is just
> red, but it will work.
>

There is a big difference between "this doesn't compile" (because someone
forgot to commit a header, a dependency change isn't in place, or there is a
platform-specific issue) and "some tests failed".
Treating them the same encourages people to ignore the results from the CI
system, as they will get used to seeing it report failures.

While this is not such a big deal for Linux, it is a massive deal for the
smaller platforms that far fewer people run.

Saying you can merge when the CI says it is failing is setting ourselves up
for failure.


> Maybe this red button will convince people to fix their tests. (one can
> hope, right?)
>
> Of course, if we do this change it should happen after we've made the
> change that fixes tests failing because of how the Gitlab CI is set up
> (you mentioned we had to wait for Jenkins to be disabled for that).
>
> Cheers,
>   Albert
>

Regards,
Ben


>
> >
> >
> > >
> > > Cheers
> > > Friedrich
> > >
> >
> > Cheers,
> > Ben
> >
>
>
>
>
>


Re: Gitlab CI: failed unit tests vs. currently passing CI

2022-01-21 Thread Ben Cooksley
On Sat, Jan 22, 2022 at 1:31 PM Friedrich
W. H. Kossebau  wrote:

> Hi,


> seems that Gitlab CI is currently configured to show the green "Success"
> checkmark for pipeline runs even if unit tests are failing.
>

That is correct, only compilation or other internal failures cause the
build to show a failure result.


> Reason seems to be that Gitlab only knows Yay or Nay, without the
> warning state level known from Jenkins.
>

Also correct.


> And given that quite a few projects (sadly) maintain some long-time failing
> unit tests, having the pipeline fail on unit tests seems to have been seen
> as too aggressive


Correct again.


>
>
> This of course harms the purpose of the unit tests, when their failures
> are
> only noticed weeks later, not e.g. at MR discussion time.
>

Gitlab does note changes in the test suite as can currently be seen on
https://invent.kde.org/frameworks/kio/-/merge_requests/708
Quoting the page:  "Test summary contained 33 failed and 16 fixed test
results out of 205 total tests"

It does the same thing for Code Quality - "Code quality scanning detected
51 changes in merged results"


>
> Seeing how, at least in KDE Frameworks, the first regressions have sneaked
> in without anyone noticing (nobody looks at logs when the surface shows a
> green checkmark; e.g. kcoreaddons, kwidgetsaddons, kio, purpose and krunner
> on openSUSE, and possibly more, have regressed in recent weeks - see
> build.kde.org), this should be something to deal with better, right?


> Bhushan gave two initial ideas just now on #kde-sysadmin:
> > Well, we can add a switch that repos can commit to, saying test failure
> > is build failure
> > Another alternative is we use a bot to write a comment on the MR
>
> IMHO, to give unit tests the purpose they have, we should by default let
> test failures be build failures, and have projects opt out if they need to
> keep some unit tests failing, instead of e.g. tagging them as expected
> failures or handling any special environment they run into on the CI.
>
> Your opinions?
>

The switch will need to be the other way around, I'm afraid, as there are
simply too many projects with broken tests right now.
The best place for that switch will be in .kde-ci.yml.
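
To sketch what that could look like (the option name is invented purely for
illustration - no such switch exists yet):

Options:
  # Invented name for the proposed opt-in; projects with currently
  # broken tests would simply leave it unset.
  require-passing-tests: True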

My only concern, however, would be abuse of this switch, much in the way that
certain projects abuse EXCLUDE_DEPRECATED_BEFORE_AND_AT.
The last thing we would want is for people to flip this switch and then
leave their CI builds in a failing state - meaning that actual compilation
failures would be missed (and then lead to CI maintenance issues).

Thoughts on that?


>
> Cheers
> Friedrich
>

Cheers,
Ben


[sysadmin/ci-management] seeds: Due to Plasma Framework having a hard dependency on KGlobalAccel we are forced to build it on Windows as well.

2022-01-05 Thread Ben Cooksley
Git commit b22096531b29c572197a6c5e78f0248ab5e84274 by Ben Cooksley.
Committed on 05/01/2022 at 18:37.
Pushed by bcooksley into branch 'master'.

Due to Plasma Framework having a hard dependency on KGlobalAccel we are forced 
to build it on Windows as well.
Just like KAuth/KTextEditor, this falls into the no-op category, so we should
probably look at a way of eliminating this hard dependency (unless
plasma-framework makes no sense on Windows either, of course...)

CCMAIL: kde-frameworks-devel@kde.org

M  +1 -1  seeds/frameworks-latest.yml

https://invent.kde.org/sysadmin/ci-management/commit/b22096531b29c572197a6c5e78f0248ab5e84274

diff --git a/seeds/frameworks-latest.yml b/seeds/frameworks-latest.yml
index 4b86e04..90ce94a 100644
--- a/seeds/frameworks-latest.yml
+++ b/seeds/frameworks-latest.yml
@@ -57,6 +57,7 @@
 "frameworks/kdoctools": "master"
 "frameworks/kemoticons": "master"
 "frameworks/kfilemetadata": "master"
+"frameworks/kglobalaccel": "master"
 "frameworks/khtml": "master"
 "frameworks/kidletime": "master"
 "frameworks/kinit": "master"
@@ -85,7 +86,6 @@
 "frameworks/baloo": "master"
 "frameworks/kactivities-stats": "master"
 "frameworks/kdesu": "master"
-"frameworks/kglobalaccel": "master"
 "frameworks/kpty": "master"
 "frameworks/kwayland": "master"
 


[sysadmin/ci-management] seeds: Due to KTextEditor having a hard dependency on KAuth on Windows, we need to include KAuth in the seed for build on Windows.

2022-01-04 Thread Ben Cooksley
Git commit 63591f66d36d7914b847e471ee6f3e789cbcb4cf by Ben Cooksley.
Committed on 05/01/2022 at 03:58.
Pushed by bcooksley into branch 'master'.

Due to KTextEditor having a hard dependency on KAuth on Windows, we need to 
include KAuth in the seed for the Windows build.

CCMAIL: kde-frameworks-devel@kde.org

M  +1 -2  seeds/frameworks-latest.yml

https://invent.kde.org/sysadmin/ci-management/commit/63591f66d36d7914b847e471ee6f3e789cbcb4cf

diff --git a/seeds/frameworks-latest.yml b/seeds/frameworks-latest.yml
index 3837da2..4b86e04 100644
--- a/seeds/frameworks-latest.yml
+++ b/seeds/frameworks-latest.yml
@@ -47,6 +47,7 @@
 "frameworks/breeze-icons": "master"
 "frameworks/frameworkintegration": "master"
 "frameworks/kactivities": "master"
+"frameworks/kauth": "master"
 "frameworks/kdbusaddons": "master"
 "frameworks/kdeclarative": "master"
 "frameworks/kdelibs4support": "master"
@@ -81,10 +82,8 @@
   'require':
 "libraries/plasma-wayland-protocols": "master"
 "libraries/polkit-qt-1": "master"
-
 "frameworks/baloo": "master"
 "frameworks/kactivities-stats": "master"
-"frameworks/kauth": "master"
 "frameworks/kdesu": "master"
 "frameworks/kglobalaccel": "master"
 "frameworks/kpty": "master"


[frameworks/ktexteditor] /: KTextEditor has a hard dependency on KAuth - ensure it is available.

2022-01-04 Thread Ben Cooksley
Git commit b83f75b7be0c289a55e4afc1d6281c2f5c9fbffa by Ben Cooksley.
Committed on 05/01/2022 at 03:56.
Pushed by bcooksley into branch 'master'.

KTextEditor has a hard dependency on KAuth - ensure it is available.
On Linux/FreeBSD this is normally pulled in via KIO - however, KIO only requires
KAuth on those two platforms, meaning the build will fail on Windows without
this explicit dependency.

Given that KAuth is a no-op on Windows, it would be worthwhile investigating
whether KTextEditor can make its KAuth dependency optional as well.

CCMAIL: kde-frameworks-devel@kde.org

M  +1 -0  .kde-ci.yml

https://invent.kde.org/frameworks/ktexteditor/commit/b83f75b7be0c289a55e4afc1d6281c2f5c9fbffa

diff --git a/.kde-ci.yml b/.kde-ci.yml
index f58a20f6..280bc31b 100644
--- a/.kde-ci.yml
+++ b/.kde-ci.yml
@@ -3,6 +3,7 @@ Dependencies:
   'require':
 'frameworks/extra-cmake-modules': '@same'
 'frameworks/karchive' : '@same'
+'frameworks/kauth': '@same'
 'frameworks/kconfig' : '@same'
 'frameworks/kguiaddons' : '@same'
 'frameworks/ki18n' : '@same'


Re: Gitlab CI for Windows

2022-01-04 Thread Ben Cooksley
On Wed, Jan 5, 2022 at 8:53 AM Christoph Cullmann (cullmann.io) <
christ...@cullmann.io> wrote:

> On 2022-01-04 20:23, Ben Cooksley wrote:
> > On Wed, Jan 5, 2022 at 6:36 AM Christoph Cullmann (cullmann.io [1])
> >  wrote:
> >
> >> On 2022-01-04 18:24, Ben Cooksley wrote:
> >>> Hi all,
> >>>
> >>> Next update in this saga appears to be a defect in KDeclarative,
> >> which
> >>> apparently has a hard dependency on KGlobalAccel.
> >>> https://invent.kde.org/sysadmin/ci-management/-/jobs/195039
> >>>
> >>> While this is something that we have previously built on Windows,
> >> from
> >>> my understanding it is essentially a no-op, so we
> >>> should probably skip building it.
> >>>
> >>> Can someone please take a look into this and advise whether
> >>> KDeclarative can also make it optional?
> >>
> >> Hi,
> >>
> >> I can take a look.
> >
> > If you could, that would be much appreciated.
>
> Nicolas was faster :=)
>
> I would assume master should already build.
>
> I have some additional small patch here
>
>
> https://invent.kde.org/frameworks/kdeclarative/-/commit/7200ad3d518f199ac040afcaf8d3330fd3f79ab7
>
> Btw., it seems the unit tests fail in the classic CI.
> Is it possible that some data/ dir is created in bin/ for Windows?
> This seems to confuse the autotests about where to find their input.
>

Yes, that is to be expected.

On Windows QStandardPaths expects to find resources in the folder data/
immediately relative to the executable.
As our executables are installed into bin/, we therefore have to install
those resources (which on an OSS Unix system would be in $prefix/share/)
into $prefix/bin/data/.

The CI Tooling also does some additional tweaking in that department by
copying in the resources from the libraries made available via Craft to
that location as well (otherwise things like the shared MIME database won't
be found).
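
To illustrate the resulting layout (a sketch - the executable name and the
entries under data/ are illustrative and vary with each project's Craft
dependencies):

$prefix/
  bin/
    sometest.exe
    data/              <- QStandardPaths looks here, relative to the executable
      mime/            <- e.g. the shared MIME database copied in by the tooling
      kservicetypes5/  <- resources that live under $prefix/share/ on Unix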


> Greetings
> Christoph
>

Cheers,
Ben


> >
> >> Greetings
> >> Christoph
> >
> > Cheers,
> > Ben
> >
> >>>
> >>> Thanks,
> >>> Ben
> >>>
> >>> On Tue, Jan 4, 2022 at 7:51 AM Ben Cooksley 
> >> wrote:
> >>>
> >>>> On Mon, Jan 3, 2022 at 9:00 AM Ben Cooksley 
> >>>> wrote:
> >>>>
> >>>>> Hi all,
> >>>>>
> >>>>> Over the past few days substantial progress has been made in
> >>>>> getting Windows builds running under Gitlab, to the point where
> >>>>> some Frameworks are now successfully compiling.
> >>>>>
> >>>>> Unfortunately we've run into a little issue with breeze-icons as
> >>>>> can be seen at
> >>>>> https://invent.kde.org/sysadmin/ci-management/-/jobs/193039
> >>>>
> >>>> Following investigation and some testing by Harald we've
> >> confirmed
> >>>> that this is a CMake bug - with it being unable to handle
> >> symlinks
> >>>> on Windows correctly.
> >>>> For now I shall work around the issue by disabling use of symlinks
> >> on
> >>>> Windows in Git (git config --system core.symlinks false); however,
> >>>> that is not an ideal long-term fix.
> >>>>
> >>>> Do we have any contacts at CMake we can escalate this bug to?
> >>>>
> >>>> As for why this didn't show up earlier - it seems our Windows
> >>>> builders for Jenkins have symlinks disabled (indicating that
> >> either
> >>>> the feature was still too experimental back then or that we did
> >> hit
> >>>> this back then and worked around it then as well)
> >>>>
> >>>>> Any ideas?
> >>>>>
> >>>>> Thanks,
> >>>>> Ben
> >>>>
> >>>> Cheers,
> >>>> Ben
> >>
> >> --
> >> Ignorance is bliss...
> >> https://cullmann.io | https://kate-editor.org
> >
> >
> > Links:
> > --
> > [1] http://cullmann.io
>
> --
> Ignorance is bliss...
> https://cullmann.io | https://kate-editor.org
>

