On Thu, 2026-03-26 at 16:19 -0700, steven chen wrote:
> On 3/26/2026 10:30 AM, Roberto Sassu wrote:
> > From: Roberto Sassu <[email protected]>
> > 
> > Add support for sending a value N between 1 and ULONG_MAX to the staging
> > interface. This value represents the number of measurements that should be
> > deleted from the current measurements list.
> > 
> > This staging method allows remote attestation agents to easily separate
> > the measurements that were verified (staged and deleted) from those that
> > were not verified due to the race between taking a TPM quote and reading
> > the measurements list.
> > 
> > In order to minimize the locking time of ima_extend_list_mutex, deleting
> > N entries is realized by staging the entire current measurements list
> > (with the lock), by determining the N-th staged entry (without the lock),
> > and by splicing the entries in excess back to the current measurements list
> > (with the lock). Finally, the N entries are deleted (without the lock).
> > 
> > Flushing the hash table is not supported for N entries, since it would
> > require removing the N entries one by one from the hash table under the
> > ima_extend_list_mutex lock, which would increase the locking time.
> > 
> > The ima_extend_list_mutex lock is necessary in ima_dump_measurement_list()
> > because ima_queue_staged_delete_partial() uses __list_cut_position() to
> > modify ima_measurements_staged, for which no RCU-safe variant exists. For
> > the staging with prompt flavor alone, list_replace_rcu() could have been
> > used instead, but since both flavors share the same kexec serialization
> > path, the mutex is required regardless.
> > 
> > Link: https://github.com/linux-integrity/linux/issues/1
> > Suggested-by: Steven Chen <[email protected]>
> > Signed-off-by: Roberto Sassu <[email protected]>
> > ---
> >   security/integrity/ima/Kconfig     |  3 ++
> >   security/integrity/ima/ima.h       |  1 +
> >   security/integrity/ima/ima_fs.c    | 22 +++++++++-
> >   security/integrity/ima/ima_queue.c | 70 ++++++++++++++++++++++++++++++
> >   4 files changed, 95 insertions(+), 1 deletion(-)
> > 
> > diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
> > index e714726f3384..6ddb4e77bff5 100644
> > --- a/security/integrity/ima/Kconfig
> > +++ b/security/integrity/ima/Kconfig
> > @@ -341,6 +341,9 @@ config IMA_STAGING
> >       It allows user space to stage the measurements list for deletion and
> >       to delete the staged measurements after confirmation.
> >   
> > +     Alternatively, it allows user space to specify a number N of
> > +     measurement entries to be deleted.
> > +
> >       On kexec, staging is reverted and staged measurements are prepended
> >       to the current measurements list when measurements are copied to the
> >       secondary kernel.
> > diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
> > index 699b735dec7d..de0693fce53c 100644
> > --- a/security/integrity/ima/ima.h
> > +++ b/security/integrity/ima/ima.h
> > @@ -319,6 +319,7 @@ struct ima_template_desc *lookup_template_desc(const char *name);
> >   bool ima_template_has_modsig(const struct ima_template_desc *ima_template);
> >   int ima_queue_stage(void);
> >   int ima_queue_staged_delete_all(void);
> > +int ima_queue_staged_delete_partial(unsigned long req_value);
> >   int ima_restore_measurement_entry(struct ima_template_entry *entry);
> >   int ima_restore_measurement_list(loff_t bufsize, void *buf);
> >   int ima_measurements_show(struct seq_file *m, void *v);
> > diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
> > index 39d9128e9f22..eb3f343c1138 100644
> > --- a/security/integrity/ima/ima_fs.c
> > +++ b/security/integrity/ima/ima_fs.c
> > @@ -28,6 +28,7 @@
> >    * Requests:
> >    * 'A\n': stage the entire measurements list
> >    * 'D\n': delete all staged measurements
> > + * '[1, ULONG_MAX]\n': delete N measurement entries
> >    */
> >   #define STAGED_REQ_LENGTH 21
> >   
> > @@ -319,6 +320,7 @@ static ssize_t ima_measurements_staged_write(struct file *file,
> >                                          size_t datalen, loff_t *ppos)
> >   {
> >     char req[STAGED_REQ_LENGTH];
> > +   unsigned long req_value;
> >     int ret;
> >   
> >     if (*ppos > 0 || datalen < 2 || datalen > STAGED_REQ_LENGTH)
> > @@ -346,7 +348,25 @@ static ssize_t ima_measurements_staged_write(struct file *file,
> >             ret = ima_queue_staged_delete_all();
> >             break;
> >     default:
> > -           ret = -EINVAL;
> > +           if (ima_flush_htable) {
> > +                   pr_debug("Deleting staged N measurements not supported when flushing the hash table is requested\n");
> > +                   return -EINVAL;
> > +           }
> > +
> > +           ret = kstrtoul(req, 10, &req_value);
> > +           if (ret < 0)
> > +                   return ret;
> > +
> > +           if (req_value == 0) {
> > +                   pr_debug("Must delete at least one entry\n");
> > +                   return -EINVAL;
> > +           }
> > +
> > +           ret = ima_queue_stage();
> > +           if (ret < 0)
> > +                   return ret;
> > +
> > +           ret = ima_queue_staged_delete_partial(req_value);
> The default case implements the "Trim N" idea plus the performance
> improvement.
> 
> Here everything is done in one pass, and this is what I said in v3.
> 
> [PATCH v3 1/3] ima: Remove ima_h_table structure 
> <https://lore.kernel.org/linux-integrity/[email protected]/T/#t>

In your approach you do:

lock ima_extend_list_mutex
scan entries until N
cut list staged -> trim
unlock ima_extend_list_mutex


In my approach I do:

lock ima_extend_list_mutex
list replace active -> staged
unlock ima_extend_list_mutex

scan entries until N

lock ima_extend_list_mutex
cut list staged -> trim
splice staged -> active
unlock ima_extend_list_mutex

So, if you are referring to less user space locking time, I guess you mean
that your solution saves one lock/unlock cycle and one list replace + list
splice.

In exchange, you propose to hold the lock in the kernel while scanning the
first N entries. I think that is a significant increase in kernel locking
time in exchange for a negligible increase in user space locking time (in
the kernel, all processes must wait for ima_extend_list_mutex to be
released; in user space, only the agent waits).

Roberto

> The two important parts of trimming are "trim N" and the performance
> improvement.
> 
> The performance improvement includes two parts:
>      hash table staging
>      active log list staging
> 
> And I think "Trim N" plus the performance improvement is the right direction
> to go. Much of the code for the two-step "stage and trim" flow (the "stage"
> part) can be removed.
> 
> Also, a race condition may happen if user space does not hold the list the
> whole time during the attestation period: from staging, through reading the
> list and attestation, to trimming.
> 
> So, to reduce the user space lock time described above, "Trim T:N" can be
> used to avoid holding the list for long in user space during attestation.
> 
> For "Trim T:N", T represents the total number of log entries trimmed since
> system boot. Please refer to
> https://lore.kernel.org/linux-integrity/[email protected]/T/#t
> 
> Thanks,
> 
> Steven
> >     }
> >   
> >     if (ret < 0)
> > diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c
> > index f5c18acfbc43..4fb557d61a88 100644
> > --- a/security/integrity/ima/ima_queue.c
> > +++ b/security/integrity/ima/ima_queue.c
> > @@ -371,6 +371,76 @@ int ima_queue_staged_delete_all(void)
> >     return 0;
> >   }
> >   
> > +int ima_queue_staged_delete_partial(unsigned long req_value)
> > +{
> > +   unsigned long req_value_copy = req_value;
> > +   unsigned long size_to_remove = 0, num_to_remove = 0;
> > +   struct list_head *cut_pos = NULL;
> > +   LIST_HEAD(ima_measurements_trim);
> > +   struct ima_queue_entry *qe;
> > +   int ret = 0;
> > +
> > +   /*
> > +    * Safe walk (no concurrent write), not under ima_extend_list_mutex
> > +    * for performance reasons.
> > +    */
> > +   list_for_each_entry(qe, &ima_measurements_staged, later) {
> > +           size_to_remove += get_binary_runtime_size(qe->entry);
> > +           num_to_remove++;
> > +
> > +           if (--req_value_copy == 0) {
> > +                   /* qe->later always points to a valid list entry. */
> > +                   cut_pos = &qe->later;
> > +                   break;
> > +           }
> > +   }
> > +
> > +   /* Nothing to remove, undoing staging. */
> > +   if (req_value_copy > 0) {
> > +           size_to_remove = 0;
> > +           num_to_remove = 0;
> > +           ret = -ENOENT;
> > +   }
> > +
> > +   mutex_lock(&ima_extend_list_mutex);
> > +   if (list_empty(&ima_measurements_staged)) {
> > +           mutex_unlock(&ima_extend_list_mutex);
> > +           return -ENOENT;
> > +   }
> > +
> > +   if (cut_pos != NULL)
> > +           /*
> > +            * ima_dump_measurement_list() does not modify the list,
> > +            * cut_pos remains the same even if it was computed before
> > +            * the lock.
> > +            */
> > +           __list_cut_position(&ima_measurements_trim,
> > +                               &ima_measurements_staged, cut_pos);
> > +
> > +   atomic_long_sub(num_to_remove, &ima_num_entries[BINARY_STAGED]);
> > +   atomic_long_add(atomic_long_read(&ima_num_entries[BINARY_STAGED]),
> > +                   &ima_num_entries[BINARY]);
> > +   atomic_long_set(&ima_num_entries[BINARY_STAGED], 0);
> > +
> > +   if (IS_ENABLED(CONFIG_IMA_KEXEC)) {
> > +           binary_runtime_size[BINARY_STAGED] -= size_to_remove;
> > +           binary_runtime_size[BINARY] +=
> > +                                   binary_runtime_size[BINARY_STAGED];
> > +           binary_runtime_size[BINARY_STAGED] = 0;
> > +   }
> > +
> > +   /*
> > +    * Splice (prepend) any remaining non-deleted staged entries to the
> > +    * active list (RCU not needed, there cannot be concurrent readers).
> > +    */
> > +   list_splice(&ima_measurements_staged, &ima_measurements);
> > +   INIT_LIST_HEAD(&ima_measurements_staged);
> > +   mutex_unlock(&ima_extend_list_mutex);
> > +
> > +   ima_queue_delete(&ima_measurements_trim, false);
> > +   return ret;
> > +}
> > +
> >   static void ima_queue_delete(struct list_head *head, bool flush_htable)
> >   {
> >     struct ima_queue_entry *qe, *qe_tmp;
> 

