On Mon, May 12, 2025 at 04:12:35PM -0700, Marc Herbert wrote:
> Thanks for the prompt feedback!
> 
> On 2025-05-12 11:35, Alison Schofield wrote:
> 
> > Since this patch is doing two things, the journalctl timing and
> > the parsing of additional messages, I would typically ask for two
> > patches, but - I want to do even more. I want to revive an old,
> > unmerged set tackling similar work and get it all tidied up at once.
> > 
> > https://lore.kernel.org/all/cover.1701143039.git.alison.schofi...@intel.com/
> >   cxl/test: add and use cxl_common_[start|stop] helpers
> >   cxl/test: add a cxl_ derivative of check_dmesg()
> >   cxl/test: use an explicit --since time in journalctl
> > 
> > Please take a look at how the previous patch did the journalctl start time.
> 
> We've been using a "start time" in
> https://github.com/thesofproject/sof-test for many years and it's been
> only "OK", not great. I did not know about the $SECONDS magic variable
> at the time, otherwise I would have tried it in sof-test! The main
> advantage of $SECONDS is that there is nothing to do, meaning there is
> no "cxl_common_start()" to forget or get wrong. Speaking of which: I
> tested this patch on the _entire_ ndctl/test suite, not just with
> --suite=cxl, whereas
> https://lore.kernel.org/all/d76c005105b7612dc47ccd19e102d462c0f4fc1b.1701143039.git.alison.schofi...@intel.com/
> seems to have only a CXL-specific "cxl_common_start()"?
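For illustration, a minimal sketch of the $SECONDS approach (the helper names and the 2s padding are my assumptions here, not necessarily what the patch does):

```shell
#!/bin/bash
# $SECONDS is a bash built-in counting the seconds since the shell
# started, so there is no start-time variable to initialize or forget.

since_arg() {
	# Build a journalctl --since window covering the whole run,
	# padded by 2s to tolerate small clock differences.
	printf -- '-%ds' "$(( SECONDS + 2 ))"
}

check_journal_since_start() {
	# Hypothetical helper: only look at kernel messages logged
	# since this test shell started.
	journalctl --dmesg --since "$(since_arg)"
}
```

Compared with a recorded start time, there is no helper to call at test entry; the window simply follows the lifetime of the shell.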
> 
> Also, in my experience some sort of short COOLDOWN is always necessary
> anyway, for various reasons:
> - Some tests can sometimes have "aftershocks" and a cooldown helps
>   with most of these.
> - A short gap in the logs really helps with their _readability_.
> - Clocks can shift, especially inside QEMU (I naively tried to increase
>   the number of cores in run_qemu.sh but had to give up due to "clock skew").
> - Others I probably forgot.
> 
> On my system, the average per-test duration is about 10s and I find that
> 10% is an acceptable price to pay for the peace of mind. But a start time
> should hopefully work too, at least most of the time.
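That arithmetic amounts to roughly a 1s gap per ~10s test; a hypothetical helper could look like this (COOLDOWN is the name used in the thread, the 1s default is an assumption):

```shell
#!/bin/bash
# Sketch: a short, configurable gap after each test. The 1s default
# is an assumption; a real suite would tune it per platform.
cooldown() {
	sleep "${COOLDOWN:-1}"
}
```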
>
> 
> > I believe the kmesg_fail... can be used to catch any of the failed
> > sorts that the old series wanted to do.
> 
> Yes it does. I tried to explain that, but I'm afraid my English wasn't
> good enough?
> 
> > Maybe add a brief write up of how to use the kmesg choices per
> > test and in the common code.
> 
> Q.E.D ;-)
> 
> > Is the new kmesg approach going to fail on any ERROR or WARNING that
> > we don't kmesg_no_fail_on ?
> 
> Yes, this is the main purpose. The other feature is failing when
> any of the _expected_ ERROR or WARNING is not found.
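In pseudo-shell, the two checks look roughly like this (only the two array names follow the patch; check_kmsg() and the sample patterns are illustrative assumptions):

```shell
#!/bin/bash
# Sketch of the two kmsg checks. Only the array names follow the patch;
# check_kmsg() and the sample patterns are illustrative assumptions.

kmsg_no_fail_on=("known harmless warning")  # tolerated ERROR/WARNINGs
kmsg_fail_if_missing=("poison inject")      # must appear, or the test is broken

check_kmsg() {
	local log=$1 line pat tolerated

	# 1. Fail on any ERROR or WARNING not explicitly tolerated.
	while IFS= read -r line; do
		tolerated=
		for pat in "${kmsg_no_fail_on[@]}"; do
			case $line in *"$pat"*) tolerated=1; break ;; esac
		done
		[ -n "$tolerated" ] || { echo "unexpected: $line" >&2; return 1; }
	done < <(grep -E 'ERROR|WARNING' "$log")

	# 2. Fail when any _expected_ message is missing.
	for pat in "${kmsg_fail_if_missing[@]}"; do
		grep -q -- "$pat" "$log" || { echo "missing: $pat" >&2; return 1; }
	done
}
```

Each test only has to declare its two arrays; the common code does both passes over the captured log.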
> 
> > And then can we simply add dev_dbg() messages to fail if missing.
> 
> I'm afraid you just lost me at this point... my patch already does that
> without any dev_dbg()...?

Let me rephrase that: can we simply add dev_dbg() messages to the
'kmesg_' fail scheme, like in my check_dmesg() patch?

> 
> > I'll take a further look for example at the poison test. We want
> > it to warn that the poison is in a region. That is a good and
> > expected warning.  However, if that warn is missing, then the test
> > is broken! It might not 'FAIL', but it's no longer doing what we
> > want.
> 
> I agree: the expected "poison inject" and "poison clear" messages should
> be in the kmsg_fail_if_missing[] array, not in the kmsg_no_fail_on[]
> array. BUT in my experience this makes cxl-poison.sh fail when run
> multiple times. So yes: there seems to be a problem with this test. (I
> should probably file a bug somewhere?) So I put them in
> kmsg_no_fail_on[] for now, because I don't have time to look into it
> and I don't think a problem in a single test should hold back an
> improvement for the entire suite just because that improvement exposes
> the problem. Even with just kmsg_no_fail_on[], this test is still
> better than it is now.
> 
> BTW this is a typical game of whack-a-mole every time you tighten a
> test's screws. In SOF it took 4-5 years to finally catch all firmware
> errors: https://github.com/thesofproject/sof-test/issues/297
> 
> 
> 
> > So, let's work on a rev 2 that does all the things of both our
> > patches. I'm happy to work it with you, or not.
> 
> I agree the COOLDOWN / start time is a separate feature. But... I needed
> it for the tests to pass! I find it important to keep the tests all
> passing in every commit, for bisectability etc.; I hope you agree. Also,
> it's really hard to submit anything that does not pass the tests :-)
> 

How are the tests failing without the COOLDOWN now?

> As of now, the tests tolerate cross-test pollution. Being more
> demanding when inspecting the logs obviously makes them fail, at least
> sometimes. I agree the "timing" solution should go first, so here's
> a suggested plan:
> 
> 1. a) Either I resubmit my COOLDOWN alone,
>    b) or you generalize your cxl_common_start()/starttime to non-CXL tests.
> 
> No check_dmesg() change yet. "cxl_check_dmesg()" is abandoned forever.
> 
> Then:
> 
> 2. I rebase and resubmit my kmsg_no_fail_on=...
> 
> This will give people more time to try whichever timing fix 1. ends up
> being, and to report any issue with it.
> 
> In the 1.a) case, I think your [cxl_]common_start() de-duplication is
> 99% independent and can be submitted at any point.
> 
> 
> Thoughts?

Split them into a patchset for easier review and then I'll take
a look. Thanks!

> 
> PS: keep in mind I may be pulled in other priorities at any time :-(



