Hi,

Regarding the krunner timer-based test which I authored:

We could just delete the few lines that enforce those upper-limit timeouts
and add some comments explaining that in real life those timeouts should be
met; I don't think we need to delete the entire test. When I submitted the
MR, I was explicitly asked to write a test to ensure this functionality
wouldn't regress in the future.

Also, I'm wondering... if we happen to have a compilation directive that's
#defined only in this CI environment, or some framework function that returns
whether we are running in CI, we could wrap just those few upper-limit timeout
lines in an #ifndef (or an if) so they aren't compiled/run in the CI
environment. I'm not familiar with this environment, so I don't know if such a
thing exists; I'm just wondering...

I'm available to submit a quick MR if you'd like me to, just give me some 
directions for what you'd like me to do.

[],
Eduardo

________________________________
From: Ben Cooksley <bcooks...@kde.org>
Sent: Sunday, March 13, 2022 1:53 PM
To: KDE Frameworks <kde-frameworks-devel@kde.org>
Cc: eduardo.c...@kdemail.net <eduardo.c...@kdemail.net>
Subject: Re: Unit tests all pass in Jenkins on Linux

On Mon, Mar 14, 2022 at 4:40 AM David Faure <fa...@kde.org> wrote:
After the recent discussions on state of CI, I fixed the last unittest failures 
(kio, purpose... + apol fixed ECM) so that
https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/
is all green^H^Hblue again.
Please keep it that way!

Thanks for looking into and fixing all of these David.


Note however that

* kwayland has a flaky test:

https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/kwayland/job/kf5-qt5%20SUSEQt5.15/171/testReport/junit/projectroot.autotests/client/kwayland_testDataDevice/

FAIL!  : TestDataDevice::testReplaceSource() Compared values are not the same
   Actual   (selectionOfferedSpy.count()): 1
   Expected (2)                          : 2
   Loc: [autotests/client/test_datadevice.cpp(557)]

Who can look at this one? git log mostly shows Martin Flöser <mgraess...@kde.org>
who I think isn't active anymore?

Not sure if it applies to KWayland as well, but I know that KWin has load 
sensitive tests (which is why the Gitlab .kde-ci.yml files support the flag 
tests-load-sensitive)
If this test appears to be flaky, then it is quite possible that it is load 
sensitive as well.


* krunner has a flaky test [2] because it measures time spent and expects small 
values like 65ms
(I changed that one to 100ms), 250ms, 300ms. With only 10% safety margins. On a 
busy CI system,
this is bound to fail regularly, even with bigger safety margins. In my 
experience this kind of test
is just not possible (we're not running on a real time OS), I vote for removing 
the test.
CC'ing Eduardo.

https://build.kde.org/job/Frameworks/view/Platform%20-%20SUSEQt5.15/job/krunner/job/kf5-qt5%20SUSEQt5.15/325/testReport/junit/projectroot/autotests/runnermanagertest/

Yes, that will definitely fail more often than not - your only way to make sure 
tests like this pass on our CI system is to set tests-load-sensitive=True (in 
Gitlab CI)
Note however that option should be avoided where possible as it means your 
build will stop and wait for load to fall to low levels before proceeding with 
running tests - which blocks a CI worker slot from being used by another 
project.
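(For reference, my understanding is that the flag Ben mentions would go in the
project's .kde-ci.yml roughly like this; this is a sketch only, and the exact
key name and placement should be verified against the CI tooling documentation:

```yaml
# Sketch: marks this project's tests as load-sensitive, so the CI
# worker waits for system load to drop before running them.
Options:
  tests-load-sensitive: True
```
)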

I'd also be in favour of removing this test.



--
David Faure, fa...@kde.org, http://www.davidfaure.fr
Working on KDE Frameworks 5




Cheers,
Ben