Background:

I have several customers that will be using a certain remote signing
service for signing their images, so that the private keys are never
exposed outside that company's secure servers. This is done via a
PKCS#11 interface that talks to the remote signing server, and all of
that works quite well.

However, the way this particular signing service works is that one
must create a "signing session" up front, indicating which keys will
be used and, importantly, how many times each key will (may) be used.
Then, depending on the keys requested and the customer's
configuration, one or more humans must authorize that signing session.
For example, if official release keys are to be used, maybe two
different people from upper management must authorize, while if
development keys are requested, the developer himself can authorize
the session.

Once authorized, the requester receives a token that must then be used
when signing with one of the keys associated with that session.

I have that integrated into Yocto in such a way that when a CI
pipeline starts a BSP build, it automatically works out which keys
will be needed (e.g. one for signing U-Boot, another for signing a
kernel FIT image) based on bitbake metadata and requests an
appropriate signing session; the appropriate people are then notified
and can look at the details of that CI pipeline and confirm that it is
legitimate.

The problem:

The way mkimage does FIT image signing means that the remote server
can be asked to produce a signature an unbounded number of times, or
at least a number of times that cannot be determined up front. This
means that I currently need to artificially declare that a kernel key
will be used, say, 10 times, even when only a single FIT image with
just one configuration node is created.

Part of the security model is that once the allowed number of signings
with a given key has been exhausted, the authorization token becomes
useless even if it is somehow leaked from the CI - and _if_ it is
leaked/compromised and abused before the CI has gotten around to doing
its signings, the build will then fail with a clear indication of the
compromise. Clearly, having to specify a "high enough" expected use
count runs counter to that part of the security model, because it will
inevitably leave some allowed uses unspent.

While not perfect, we can give a reasonable estimate of an upper bound
on the necessary extra size by simply counting the number of hash and
signature nodes in the FIT image.
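
To make that concrete with the per-node budgets used in the code below
(128 bytes per hash node, 1024 bytes per signature node): the
three-configuration image in the example further down presumably
contains three hash nodes and three signature nodes, so the estimate
works out as

  3 * 1024 + 3 * 128 = 3456 bytes

which is exactly the size_inc shown in the log at the end.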

As indicated in the comments, one could probably make the estimate
even more precise, and if there are ever signatures larger than 512
bytes, one would probably have to do that. But this works well enough
in practice for now, and is in fact an improvement in the normal case:
currently, starting with a size_inc of 0 is guaranteed to fail, so we
always go through the loop at least twice, even when not doing any
signing but merely filling in hash values.

Just in case I've missed anything, keep the loop incrementing 1024
bytes at a time, and, in case the estimate turns out to be over 64K,
ensure that we make at least one attempt by changing to a do-while
loop.
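
To make the retry strategy concrete, here is a small, self-contained
sketch of the control flow. This is an illustration only, not the
mkimage code: try_add_data() is a made-up stand-in for "copy the
backup file and call fit_add_file_data()", and the byte counts are
made up as well.

  #include <errno.h>
  #include <stdio.h>

  /* Hypothetical stand-in for "copy bakfile over tmpfile and call
   * fit_add_file_data()": succeeds once enough extra space is available.
   */
  static int try_add_data(size_t size_inc, size_t needed)
  {
          return size_inc >= needed ? 0 : -ENOSPC;
  }

  int main(void)
  {
          size_t size_inc = 3456; /* pretend this came from the estimator */
          int ret;

          /* Seed with the estimate, keep the 1K fallback steps and the
           * 64K cap, and always make at least one attempt (do-while). */
          do {
                  printf("Trying size_inc=%zu\n", size_inc);
                  ret = try_add_data(size_inc, 3300 /* made up */);
                  if (!ret || ret != -ENOSPC)
                          break;
                  size_inc += 1024;
          } while (size_inc < 64 * 1024);

          if (!ret)
                  printf("Succeeded at size_inc=%zu\n", size_inc);
          return ret ? 1 : 0;
  }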

With a little debug printf, creating a FIT image with three
configuration nodes previously resulted in

  Trying size_inc=0
  Trying size_inc=1024
  Trying size_inc=2048
  Trying size_inc=3072
  Succeeded at size_inc=3072

and dumping info from the signing session (where I've artificially
asked for 10 uses of the kernel key) shows

      "keyid": "kernel-dev-20250218",
      "usagecount": 9,
      "maxusagecount": 10

corresponding to 1+2+3+3 signatures requested (so while the loop count
is roughly linear in the number of config nodes, the number of
signings is quadratic).
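
To spell the arithmetic out: the attempt with size_inc = k*1024 gets
roughly k+1 signatures computed before hitting -ENOSPC (assuming each
signature plus its overhead needs on the order of 1K), so n
configuration nodes cost about

  1 + 2 + ... + n + n = n*(n+3)/2

signings spread over n+1 attempts - for n = 3 that is the 9 uses shown
above.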

With this, I instead get

  Trying size_inc=3456
  Succeeded at size_inc=3456

and the expected

      "keyid": "kernel-dev-20250218",
      "usagecount": 3,
      "maxusagecount": 10

thus allowing me to set maxusagecount correctly.

Signed-off-by: Rasmus Villemoes <r...@prevas.dk>
---
 tools/fit_image.c | 80 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 70 insertions(+), 10 deletions(-)

diff --git a/tools/fit_image.c b/tools/fit_image.c
index caed8d5f901..65c83e0b950 100644
--- a/tools/fit_image.c
+++ b/tools/fit_image.c
@@ -24,6 +24,65 @@
 
 static struct legacy_img_hdr header;
 
+static int fit_estimate_hash_sig_size(struct image_tool_params *params, const char *fname)
+{
+       bool signing = IMAGE_ENABLE_SIGN && (params->keydir || params->keyfile);
+       struct stat sbuf;
+       void *fdt;
+       int fd;
+       int estimate = 0;
+       int depth, noffset;
+       const char *name;
+
+       fd = mmap_fdt(params->cmdname, fname, 0, &fdt, &sbuf, false, true);
+       if (fd < 0)
+               return -EIO;
+
+       /*
+        * Walk the FIT image, looking for nodes named hash* and
+        * signature*. Since the interesting nodes are subnodes of an
+        * image or configuration node, we are only interested in
+        * those at depth exactly 3.
+        *
+        * The estimate for a hash node is based on a sha512 digest
+        * being 64 bytes, with another 64 bytes added to account for
+        * fdt structure overhead (the tags and the name of the
+        * "value" property).
+        *
+        * The estimate for a signature node is based on an rsa4096
+        * signature being 512 bytes, with another 512 bytes to
+        * account for fdt overhead and the various other properties
+        * (hashed-nodes etc.) that will also be filled in.
+        *
+        * One could try to be more precise in the estimates by
+        * looking at the "algo" property and, in the case of
+        * configuration signatures, the sign-images property. Also,
+        * when signing an already created FIT image, the hash nodes
+        * already have properly sized value properties, so one could
+        * also take pre-existence of "value" properties in hash nodes
+        * into account. But this rather simple approach should work
+        * well enough in practice.
+        */
+       for (depth = 0, noffset = fdt_next_node(fdt, 0, &depth);
+            noffset >= 0 && depth > 0;
+            noffset = fdt_next_node(fdt, noffset, &depth)) {
+               if (depth != 3)
+                       continue;
+
+               name = fdt_get_name(fdt, noffset, NULL);
+               if (!strncmp(name, FIT_HASH_NODENAME, strlen(FIT_HASH_NODENAME)))
+                       estimate += 128;
+
+               if (signing && !strncmp(name, FIT_SIG_NODENAME, strlen(FIT_SIG_NODENAME)))
+                       estimate += 1024;
+       }
+
+       munmap(fdt, sbuf.st_size);
+       close(fd);
+
+       return estimate;
+}
+
 static int fit_add_file_data(struct image_tool_params *params, size_t size_inc,
                             const char *tmpfile)
 {
@@ -806,16 +865,16 @@ static int fit_handle_file(struct image_tool_params *params)
        rename(tmpfile, bakfile);
 
        /*
-        * Set hashes for images in the blob. Unfortunately we may need more
-        * space in either FDT, so keep trying until we succeed.
-        *
-        * Note: this is pretty inefficient for signing, since we must
-        * calculate the signature every time. It would be better to calculate
-        * all the data and then store it in a separate step. However, this
-        * would be considerably more complex to implement. Generally a few
-        * steps of this loop is enough to sign with several keys.
+        * Set hashes for images in the blob and compute
+        * signatures. We do an attempt at estimating the expected
+        * extra size, but just in case that is not sufficient, keep
+        * trying adding 1K, with a reasonable upper bound of 64K
+        * total, until we succeed.
         */
-       for (size_inc = 0; size_inc < 64 * 1024; size_inc += 1024) {
+       size_inc = fit_estimate_hash_sig_size(params, bakfile);
+       if (size_inc < 0)
+               goto err_system;
+       do {
                if (copyfile(bakfile, tmpfile) < 0) {
                        printf("Can't copy %s to %s\n", bakfile, tmpfile);
                        ret = -EIO;
@@ -824,7 +883,8 @@ static int fit_handle_file(struct image_tool_params *params)
                ret = fit_add_file_data(params, size_inc, tmpfile);
                if (!ret || ret != -ENOSPC)
                        break;
-       }
+               size_inc += 1024;
+       } while (size_inc < 64 * 1024);
 
        if (ret) {
                fprintf(stderr, "%s Can't add hashes to FIT blob: %d\n",
-- 
2.49.0
