On 11/25/25 11:50 AM, Quentin Schulz wrote:

Hello Quentin,

On 11/24/25 8:40 PM, Marek Vasut wrote:
[...]
---
  test/py/tests/test_fit_auto_signed.py | 98 +++++++++++----------------
  1 file changed, 39 insertions(+), 59 deletions(-)

diff --git a/test/py/tests/test_fit_auto_signed.py b/test/py/tests/test_fit_auto_signed.py
index 0b5dbd5401c..3895e7d9369 100644
--- a/test/py/tests/test_fit_auto_signed.py
+++ b/test/py/tests/test_fit_auto_signed.py
@@ -138,6 +138,29 @@ class SignedFitHelper(object):
  @pytest.mark.buildconfigspec('fit_signature')
  @pytest.mark.requiredtool('fdtget')
  def test_fit_auto_signed(ubman):
+    def generate_and_check_fit_image(cmd, crc=False, simgs=False, scfgs=False, bl31present=False, key_name="", sign_algo="", verifier=""):

I would have grouped variables with the same "theme" together, e.g. simgs and scfgs with key_name, sign_algo and verifier. But that's fine as is.

They are grouped together this way (see the example call after the list):
- Command
- Checker switches (crc, simgs, scfgs, bl31, tee)
- The rest of the params
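
Purely as an illustration of that grouping, a call could look like this; everything below (the command variable, key name, algorithm, verifier) is a placeholder, not taken from the actual test:

    # Hypothetical call, all values are placeholders for illustration:
    generate_and_check_fit_image(
        fit_gen_cmd,                          # command appended to mkimage
        crc=True, simgs=True, scfgs=True,     # checker switches
        key_name='our-key',                   # the rest of the params
        sign_algo='sha256,rsa2048',
        verifier=verifier)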

+        """Generate fitImage and test for expected entries.
+
+        Generate a fitImage and test whether suitable entries are part of
+        the generated fitImage. Test whether checksums and signatures are
+        part of the generated fitImage.

It would be nice to have documentation on what the parameters represent and what this returns/raises. Not a blocker though; I see we don't do it for other functions either.

I am not that deep into Python, so if you could tell me how to do that or share an example, I will add it.
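
For example, something like the Args/Returns/Raises layout used elsewhere in test/py would do; the parameter descriptions below are my guesses from the code, so adjust as needed:

    """Generate fitImage and test for expected entries.

    Generate a fitImage and test whether suitable entries are part of
    the generated fitImage. Test whether checksums and signatures are
    part of the generated fitImage.

    Args:
        cmd: mkimage arguments appended to the mkimage binary path.
        crc: Check for CRC32 checksums on the images.
        simgs: Check that the images are signed.
        scfgs: Check that the configurations are signed.
        bl31present: Expect a BL31 image node in the fitImage.
        key_name: Expected key-name-hint of the signatures.
        sign_algo: Expected signature algorithm.
        verifier: Tool/arguments used to verify the signatures.

    Returns:
        Nothing.

    Raises:
        ValueError: If the fitImage contains neither "/image" nor
            "/configuration" nodes.
    """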

+        """
+        mkimage = ubman.config.build_dir + '/tools/mkimage'
+        utils.run_and_log(ubman, mkimage + cmd)
+
+        fit = SignedFitHelper(ubman, fit_file)
+        if fit.build_nodes_sets() == 0:
+            raise ValueError('FIT-1 has no "/image" nor "/configuration" nodes')
+

Only downside is that this exception will be raised with the same string for all callers. We call it multiple times in the same test, so it might be difficult to identify which one actually failed. We could wrap the call in try..except and use the content of the exception, with something like:

try:
   generate_and_check_fit_image(...)
except Exception as e:
   pytest.fail(f'FIT-X failed: {str(e)}')

maybe? (NOT TESTED!)

Maybe it would be better to simply print the test configuration?

Like this:

raise ValueError(f'FIT has no "/image" nor "/configuration" nodes, test settings: cmd={cmd} crc={crc} simgs={simgs} scfgs={scfgs} bl31present={bl31present} teepresent={teepresent} key_name={key_name} sign_algo={sign_algo} verifier={verifier}')
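
If that line gets too long, the same message can be split with implicit string concatenation; this is purely a formatting variant of the above:

    raise ValueError(
        'FIT has no "/image" nor "/configuration" nodes, test settings: '
        f'cmd={cmd} crc={crc} simgs={simgs} scfgs={scfgs} '
        f'bl31present={bl31present} teepresent={teepresent} '
        f'key_name={key_name} sign_algo={sign_algo} verifier={verifier}')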

+        if crc:
+            fit.check_fit_crc32_images()
+        if simgs:
+            fit.check_fit_signed_images(key_name, sign_algo, verifier)
+        if scfgs:
+            fit.check_fit_signed_confgs(key_name, sign_algo)

This is a typo.... that already exists in the code (confgs instead of configs), so "fine".

Mmmm, we should also verify the signature of the configs, but it's a TODO in fit.check_fit_signed_confgs(). Maybe someone will implement that. I think it would be valuable, as I would expect most users to sign configurations rather than individual images for added security (I haven't found articles about that, but I'm sure this has been raised a few times already).

Fixing that todo seems a bit out of scope here.
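
For reference, once someone picks up that TODO, the check could probably boil down to reading each configuration's signature node with fdtget, roughly like this (the conf-1 node name and the comparison are only an example, not how check_fit_signed_confgs() is currently written):

    # Rough sketch only -- the configuration node name is an example:
    algo = utils.run_and_log(ubman,
        f'fdtget {fit_file} /configurations/conf-1/signature algo').strip()
    if algo != sign_algo:
        raise ValueError(f'Unexpected configuration signature algo: {algo}')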

[...]
