Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-29 Thread Julien Grall

Hi Bertrand,

On 29/04/2024 08:20, Bertrand Marquis wrote:

 From the comment in sched.h:
/*
  * rcu_lock_domain_by_id() is more efficient than get_domain_by_id().
  * This is the preferred function if the returned domain reference
  * is short lived,  but it cannot be used if the domain reference needs
  * to be kept beyond the current scope (e.g., across a softirq).
  * The returned domain reference must be discarded using rcu_unlock_domain().
  */

Now the question of short lived should be challenged but I do not think we can
consider the current code as "long lived".


Actually, I am not entirely sure you can use put_domain() in interrupt 
context. If you look at the implementation of domain_destroy(), it takes 
a spin lock without disabling interrupts.


The same spinlock is taken in domain_create(). So there is a potential 
deadlock.


Which means the decision between the two is not only about liveness.
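
To make the hazard concrete, here is a minimal sketch of the pattern (the
lock and function names below are placeholders, not the actual Xen code):

/* A lock taken with plain spin_lock() in process context ... */
static DEFINE_SPINLOCK(example_lock);

void process_context_path(void)        /* think domain_create() */
{
    spin_lock(&example_lock);          /* interrupts stay enabled here */
    /* ... an interrupt fires on this CPU at this point ... */
    spin_unlock(&example_lock);
}

void interrupt_context_path(void)      /* think put_domain() -> domain_destroy() */
{
    spin_lock(&example_lock);          /* spins forever: held by the code below us */
    spin_unlock(&example_lock);
}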

Cheers,

--
Julien Grall



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-29 Thread Jens Wiklander
Hi Julien,

On Fri, Apr 26, 2024 at 7:58 PM Julien Grall  wrote:
>
> Hi Jens,
>
> On 26/04/2024 09:47, Jens Wiklander wrote:
> > +static void notif_irq_enable(void *info)
> > +{
> > +struct notif_irq_info *irq_info = info;
> > +
> > +irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
> In v2, you were using request_irq(). But now you seem to be open-coding
> it. Can you explain why?

It's because request_irq() does a memory allocation that can't be done
in interrupt context.
>
> > +if ( irq_info->ret )
> > +printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
> > +   irq_info->irq, irq_info->ret);
> > +}
> > +
> > +void ffa_notif_init(void)
> > +{
> > +const struct arm_smccc_1_2_regs arg = {
> > +.a0 = FFA_FEATURES,
> > +.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
> > +};
> > +struct notif_irq_info irq_info = { };
> > +struct arm_smccc_1_2_regs resp;
> > +unsigned int cpu;
> > +
> > +arm_smccc_1_2_smc(&arg, &resp);
> > +if ( resp.a0 != FFA_SUCCESS_32 )
> > +return;
> > +
> > +irq_info.irq = resp.a2;
> > +if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= NR_GIC_SGI )
> > +{
> > +printk(XENLOG_ERR "ffa: notification initialization failed: 
> > conflicting SGI %u\n",
> > +   irq_info.irq);
> > +return;
> > +}
> > +
> > +/*
> > + * SGIs are per-CPU so we must enable the IRQ on each CPU. We use an
> > + * IPI to call notif_irq_enable() on each CPU including the current
> > + * CPU. The struct irqaction is preallocated since we can't allocate
> > + * memory while in interrupt context.
> > + */
> > +for_each_online_cpu(cpu)
> Even though we currently don't support CPU hotplug, you want to add a
> CPU Notifier to also register the IRQ when a CPU is onlined after
> ffa_notif_init().
>
> For an example, see time.c. We may also want to consider to enable TEE
> in presmp_initcalls() so we don't need to have a for_each_online_cpu().

I was considering that too. I'll update the code.

Thanks,
Jens



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-29 Thread Jens Wiklander
Hi Bertrand,

On Mon, Apr 29, 2024 at 9:20 AM Bertrand Marquis
 wrote:
[...]
> >> +static void notif_irq_handler(int irq, void *data)
> >> +{
> >> +const struct arm_smccc_1_2_regs arg = {
> >> +.a0 = FFA_NOTIFICATION_INFO_GET_64,
> >> +};
> >> +struct arm_smccc_1_2_regs resp;
> >> +unsigned int id_pos;
> >> +unsigned int list_count;
> >> +uint64_t ids_count;
> >> +unsigned int n;
> >> +int32_t res;
> >> +
> >> +do {
> >> +arm_smccc_1_2_smc(&arg, &resp);
> >> +res = ffa_get_ret_code();
> >> +if ( res )
> >> +{
> >> +if ( res != FFA_RET_NO_DATA )
> >> +printk(XENLOG_ERR "ffa: notification info get failed: 
> >> error %d\n",
> >> +   res);
> >> +return;
> >> +}
> >> +
> >> +ids_count = resp.a2 >> FFA_NOTIF_INFO_GET_ID_LIST_SHIFT;
> >> +list_count = ( resp.a2 >> FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT ) &
> >> + FFA_NOTIF_INFO_GET_ID_COUNT_MASK;
> >> +
> >> +id_pos = 0;
> >> +for ( n = 0; n < list_count; n++ )
> >> +{
> >> +unsigned int count = ((ids_count >> 2 * n) & 0x3) + 1;
> >> +struct domain *d;
> >> +
> >> +d = ffa_get_domain_by_vm_id(get_id_from_resp(&resp, id_pos));
> >
> > Thinking a bit more about the question from Bertrand about get_domain_id() 
> > vs rcu_lock_domain_by_id(). I am actually not sure whether either are ok 
> > here.
> >
> > If I am not mistaken, d->arch.tee will be freed as part of the domain 
> > teardown process. This means you can have the following scenario:
> >
> > CPU0: ffa_get_domain_by_vm_id() (return the domain as it is alive)
> >
> > CPU1: call domain_kill()
> > CPU1: teardown is called, free d->arch.tee (the pointer is not set to NULL)
> >
> > d->arch.tee is now a dangling pointer
> >
> > CPU0: access d->arch.tee
> >
> > This implies you may need to gain a global lock (I don't have a better idea 
> > so far) to protect the IRQ handler against domains teardown.
> >
> > Did I miss anything?
>
> I think you are right, which is why I was thinking of using
> rcu_lock_live_remote_domain_by_id() to only get a reference
> to the domain if it is not dying.
>
> From the comment in sched.h:
> /*
>  * rcu_lock_domain_by_id() is more efficient than get_domain_by_id().
>  * This is the preferred function if the returned domain reference
>  * is short lived,  but it cannot be used if the domain reference needs
>  * to be kept beyond the current scope (e.g., across a softirq).
>  * The returned domain reference must be discarded using rcu_unlock_domain().
>  */
>
> Now the question of short lived should be challenged but I do not think we can
> consider the current code as "long lived".
>
> It would be a good idea to start a mailing list thread discussion with other
> maintainers on the subject to confirm.
> @Jens: as I will be off for some time, would you mind doing it?

Sure, first I'll send out a new patch set with the current comments
addressed. I'll update to use rcu_lock_live_remote_domain_by_id() both
because it makes more sense than the current code, and also to have a
good base for the discussion.
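
As a starting point, something along these lines (a rough sketch only; the
wrapper name is made up here, the -1 mirrors ffa_get_vm_id(), and whether
this is enough to close the teardown race is exactly what needs discussing):

/* Hypothetical wrapper around rcu_lock_live_remote_domain_by_id(). */
static int ffa_rcu_lock_domain_by_vm_id(uint16_t vm_id, struct domain **d)
{
    return rcu_lock_live_remote_domain_by_id(vm_id - 1, d);
}

/* In the notification interrupt handler: */
struct domain *d;

if ( !ffa_rcu_lock_domain_by_vm_id(get_id_from_resp(&resp, id_pos), &d) )
{
    /* d is alive and not dying for as long as the RCU lock is held */
    /* ... deliver the pending notification to d ... */
    rcu_unlock_domain(d);
}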

Thanks,
Jens



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-29 Thread Jens Wiklander
Hi Julien,

On Fri, Apr 26, 2024 at 9:07 PM Julien Grall  wrote:
>
> Hi Jens,
>
> On 26/04/2024 09:47, Jens Wiklander wrote:
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index d7306aa64d50..5224898265a5 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -155,7 +155,7 @@ void __init init_IRQ(void)
> >   {
> >   /* SGIs are always edge-triggered */
> >   if ( irq < NR_GIC_SGI )
> > -local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
> > +local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
>
> This change seems unrelated to this patch. This wants to be separate
> (if you actually intended to change it).

I'm sorry, my bad. I meant for this to go into "xen/arm: allow
dynamically assigned SGI handlers".

>
> >   else
> >   local_irqs_type[irq] = IRQ_TYPE_INVALID;
> >   }
> > diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> > index f0112a2f922d..7c0f46f7f446 100644
> > --- a/xen/arch/arm/tee/Makefile
> > +++ b/xen/arch/arm/tee/Makefile
> > @@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
> >   obj-$(CONFIG_FFA) += ffa_shm.o
> >   obj-$(CONFIG_FFA) += ffa_partinfo.o
> >   obj-$(CONFIG_FFA) += ffa_rxtx.o
> > +obj-$(CONFIG_FFA) += ffa_notif.o
> >   obj-y += tee.o
> >   obj-$(CONFIG_OPTEE) += optee.o
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index 5209612963e1..912e905a27bd 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -39,6 +39,9 @@
> >*   - at most 32 shared memory regions per guest
> >* o FFA_MSG_SEND_DIRECT_REQ:
> >*   - only supported from a VM to an SP
> > + * o FFA_NOTIFICATION_*:
> > + *   - only supports global notifications, that is, per vCPU notifications
> > + * are not supported
> >*
> >* There are some large locked sections with ffa_tx_buffer_lock and
> >* ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
> > @@ -194,6 +197,8 @@ out:
> >
> >   static void handle_features(struct cpu_user_regs *regs)
> >   {
> > +struct domain *d = current->domain;
> > +struct ffa_ctx *ctx = d->arch.tee;
> >   uint32_t a1 = get_user_reg(regs, 1);
> >   unsigned int n;
> >
> > @@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
> >   BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
> >   ffa_set_regs_success(regs, 0, 0);
> >   break;
> > +case FFA_FEATURE_NOTIF_PEND_INTR:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> > +case FFA_FEATURE_SCHEDULE_RECV_INTR:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> > +
> > +case FFA_NOTIFICATION_BIND:
> > +case FFA_NOTIFICATION_UNBIND:
> > +case FFA_NOTIFICATION_GET:
> > +case FFA_NOTIFICATION_SET:
> > +case FFA_NOTIFICATION_INFO_GET_32:
> > +case FFA_NOTIFICATION_INFO_GET_64:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, 0, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> >   default:
> >   ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> >   break;
> > @@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
> >get_user_reg(regs, 
> > 1)),
> >  get_user_reg(regs, 3));
> >   break;
> > +case FFA_NOTIFICATION_BIND:
> > +e = ffa_handle_notification_bind(regs);
> > +break;
> > +case FFA_NOTIFICATION_UNBIND:
> > +e = ffa_handle_notification_unbind(regs);
> > +break;
> > +case FFA_NOTIFICATION_INFO_GET_32:
> > +case FFA_NOTIFICATION_INFO_GET_64:
> > +ffa_handle_notification_info_get(regs);
> > +return true;
> > +case FFA_NOTIFICATION_GET:
> > +ffa_handle_notification_get(regs);
> > +return true;
> > +case FFA_NOTIFICATION_SET:
> > +e = ffa_handle_notification_set(regs);
> > +break;
> >
> >   default:
> >   gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
> > @@ -322,6 +367,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
> >   static int ffa_domain_init(struct domain *d)
> >   {
> >   struct ffa_ctx *ctx;
> > +int ret;
> >
> >   if ( !ffa_version )
> >   return -ENODEV;
> > @@ -345,10 +391,11 @@ static int ffa_domain_init(struct domain *d)
> >* error, so no need for cleanup in this function.
> >*/
> >
> > -if ( !ffa_partinfo_domain_init(d) )
> > -return -EIO;
> > +ret = ffa_partinfo_domain_init(d);
> > +if ( ret )
> 

Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-29 Thread Bertrand Marquis
Hi Julien,

> On 26 Apr 2024, at 21:07, Julien Grall  wrote:
> 
> Hi Jens,
> 
> On 26/04/2024 09:47, Jens Wiklander wrote:
>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>> index d7306aa64d50..5224898265a5 100644
>> --- a/xen/arch/arm/irq.c
>> +++ b/xen/arch/arm/irq.c
>> @@ -155,7 +155,7 @@ void __init init_IRQ(void)
>>  {
>>  /* SGIs are always edge-triggered */
>>  if ( irq < NR_GIC_SGI )
>> -local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
>> +local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
> 
> This change seems unrelated to this patch. This wants to be separate (if you 
> actually intended to change it).
> 
>>  else
>>  local_irqs_type[irq] = IRQ_TYPE_INVALID;
>>  }
>> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
>> index f0112a2f922d..7c0f46f7f446 100644
>> --- a/xen/arch/arm/tee/Makefile
>> +++ b/xen/arch/arm/tee/Makefile
>> @@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
>>  obj-$(CONFIG_FFA) += ffa_shm.o
>>  obj-$(CONFIG_FFA) += ffa_partinfo.o
>>  obj-$(CONFIG_FFA) += ffa_rxtx.o
>> +obj-$(CONFIG_FFA) += ffa_notif.o
>>  obj-y += tee.o
>>  obj-$(CONFIG_OPTEE) += optee.o
>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>> index 5209612963e1..912e905a27bd 100644
>> --- a/xen/arch/arm/tee/ffa.c
>> +++ b/xen/arch/arm/tee/ffa.c
>> @@ -39,6 +39,9 @@
>>   *   - at most 32 shared memory regions per guest
>>   * o FFA_MSG_SEND_DIRECT_REQ:
>>   *   - only supported from a VM to an SP
>> + * o FFA_NOTIFICATION_*:
>> + *   - only supports global notifications, that is, per vCPU notifications
>> + * are not supported
>>   *
>>   * There are some large locked sections with ffa_tx_buffer_lock and
>>   * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
>> @@ -194,6 +197,8 @@ out:
>>static void handle_features(struct cpu_user_regs *regs)
>>  {
>> +struct domain *d = current->domain;
>> +struct ffa_ctx *ctx = d->arch.tee;
>>  uint32_t a1 = get_user_reg(regs, 1);
>>  unsigned int n;
>>  @@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
>>  BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
>>  ffa_set_regs_success(regs, 0, 0);
>>  break;
>> +case FFA_FEATURE_NOTIF_PEND_INTR:
>> +if ( ctx->notif.enabled )
>> +ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
>> +else
>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>> +break;
>> +case FFA_FEATURE_SCHEDULE_RECV_INTR:
>> +if ( ctx->notif.enabled )
>> +ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
>> +else
>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>> +break;
>> +
>> +case FFA_NOTIFICATION_BIND:
>> +case FFA_NOTIFICATION_UNBIND:
>> +case FFA_NOTIFICATION_GET:
>> +case FFA_NOTIFICATION_SET:
>> +case FFA_NOTIFICATION_INFO_GET_32:
>> +case FFA_NOTIFICATION_INFO_GET_64:
>> +if ( ctx->notif.enabled )
>> +ffa_set_regs_success(regs, 0, 0);
>> +else
>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>> +break;
>>  default:
>>  ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>>  break;
>> @@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>   get_user_reg(regs, 1)),
>> get_user_reg(regs, 3));
>>  break;
>> +case FFA_NOTIFICATION_BIND:
>> +e = ffa_handle_notification_bind(regs);
>> +break;
>> +case FFA_NOTIFICATION_UNBIND:
>> +e = ffa_handle_notification_unbind(regs);
>> +break;
>> +case FFA_NOTIFICATION_INFO_GET_32:
>> +case FFA_NOTIFICATION_INFO_GET_64:
>> +ffa_handle_notification_info_get(regs);
>> +return true;
>> +case FFA_NOTIFICATION_GET:
>> +ffa_handle_notification_get(regs);
>> +return true;
>> +case FFA_NOTIFICATION_SET:
>> +e = ffa_handle_notification_set(regs);
>> +break;
>>default:
>>  gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
>> @@ -322,6 +367,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>  static int ffa_domain_init(struct domain *d)
>>  {
>>  struct ffa_ctx *ctx;
>> +int ret;
>>if ( !ffa_version )
>>  return -ENODEV;
>> @@ -345,10 +391,11 @@ static int ffa_domain_init(struct domain *d)
>>   * error, so no need for cleanup in this function.
>>   */
>>  -if ( !ffa_partinfo_domain_init(d) )
>> -return -EIO;
>> +ret = ffa_partinfo_domain_init(d);
>> +if ( ret )
>> +return ret;
>>  -return 0;
>> +return ffa_notif_domain_init(d);
>>  }
>>static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool 
>> first_time)
>> @@ -423,6 +470,7 @@ static int ffa_domain_teardown(struct domain *d)
>>  return 

Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Julien Grall

Hi Jens,

On 26/04/2024 09:47, Jens Wiklander wrote:

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index d7306aa64d50..5224898265a5 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -155,7 +155,7 @@ void __init init_IRQ(void)
  {
  /* SGIs are always edge-triggered */
  if ( irq < NR_GIC_SGI )
-local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
+local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;


This change seems unrelated to this patch. This wants to be separate 
(if you actually intended to change it).



  else
  local_irqs_type[irq] = IRQ_TYPE_INVALID;
  }
diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index f0112a2f922d..7c0f46f7f446 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
  obj-$(CONFIG_FFA) += ffa_shm.o
  obj-$(CONFIG_FFA) += ffa_partinfo.o
  obj-$(CONFIG_FFA) += ffa_rxtx.o
+obj-$(CONFIG_FFA) += ffa_notif.o
  obj-y += tee.o
  obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 5209612963e1..912e905a27bd 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -39,6 +39,9 @@
   *   - at most 32 shared memory regions per guest
   * o FFA_MSG_SEND_DIRECT_REQ:
   *   - only supported from a VM to an SP
+ * o FFA_NOTIFICATION_*:
+ *   - only supports global notifications, that is, per vCPU notifications
+ * are not supported
   *
   * There are some large locked sections with ffa_tx_buffer_lock and
   * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
@@ -194,6 +197,8 @@ out:
  
  static void handle_features(struct cpu_user_regs *regs)

  {
+struct domain *d = current->domain;
+struct ffa_ctx *ctx = d->arch.tee;
  uint32_t a1 = get_user_reg(regs, 1);
  unsigned int n;
  
@@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)

  BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
  ffa_set_regs_success(regs, 0, 0);
  break;
+case FFA_FEATURE_NOTIF_PEND_INTR:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
+case FFA_FEATURE_SCHEDULE_RECV_INTR:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
+
+case FFA_NOTIFICATION_BIND:
+case FFA_NOTIFICATION_UNBIND:
+case FFA_NOTIFICATION_GET:
+case FFA_NOTIFICATION_SET:
+case FFA_NOTIFICATION_INFO_GET_32:
+case FFA_NOTIFICATION_INFO_GET_64:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, 0, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
  default:
  ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
  break;
@@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
   get_user_reg(regs, 1)),
 get_user_reg(regs, 3));
  break;
+case FFA_NOTIFICATION_BIND:
+e = ffa_handle_notification_bind(regs);
+break;
+case FFA_NOTIFICATION_UNBIND:
+e = ffa_handle_notification_unbind(regs);
+break;
+case FFA_NOTIFICATION_INFO_GET_32:
+case FFA_NOTIFICATION_INFO_GET_64:
+ffa_handle_notification_info_get(regs);
+return true;
+case FFA_NOTIFICATION_GET:
+ffa_handle_notification_get(regs);
+return true;
+case FFA_NOTIFICATION_SET:
+e = ffa_handle_notification_set(regs);
+break;
  
  default:

  gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -322,6 +367,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
  static int ffa_domain_init(struct domain *d)
  {
  struct ffa_ctx *ctx;
+int ret;
  
  if ( !ffa_version )

  return -ENODEV;
@@ -345,10 +391,11 @@ static int ffa_domain_init(struct domain *d)
   * error, so no need for cleanup in this function.
   */
  
-if ( !ffa_partinfo_domain_init(d) )

-return -EIO;
+ret = ffa_partinfo_domain_init(d);
+if ( ret )
+return ret;
  
-return 0;

+return ffa_notif_domain_init(d);
  }
  
  static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time)

@@ -423,6 +470,7 @@ static int ffa_domain_teardown(struct domain *d)
  return 0;
  
  ffa_rxtx_domain_destroy(d);

+ffa_notif_domain_destroy(d);
  
  ffa_domain_teardown_continue(ctx, true /* first_time */);
  
@@ -502,6 +550,7 @@ static bool ffa_probe(void)

  if ( !ffa_partinfo_init() )
  goto err_rxtx_destroy;
  
+ffa_notif_init();

  INIT_LIST_HEAD(&ffa_teardown_head);
  init_timer(&ffa_teardown_timer, 

Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Julien Grall

Hi Bertrand,

On 26/04/2024 10:20, Bertrand Marquis wrote:

+static inline struct domain *ffa_get_domain_by_vm_id(uint16_t vm_id)
+{
+/* -1 to match ffa_get_vm_id() */
+return get_domain_by_id(vm_id - 1);
+}
+


I think there is a global discussion to have on using get_domain_by_vm_id
or rcu_lock_live_remote_domain_by_id to make sure the domain is not
dying when we try to do something with it.

It does not need to be done here as there are other places to adapt, but
I wanted to raise the question.

I would like to know what you and also other maintainers think here.
@Julien/Michal/Stefano ?


Good question. I think the intention is that get_domain_by_id() should be 
called when you need a reference for longer.


I would consider involving the REST in this discussion.

Cheers,

--
Julien Grall



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Julien Grall

Hi Jens,

On 26/04/2024 09:47, Jens Wiklander wrote:

+static void notif_irq_enable(void *info)
+{
+struct notif_irq_info *irq_info = info;
+
+irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
In v2, you were using request_irq(). But now you seem to be open-coding 
it. Can you explain why?



+if ( irq_info->ret )
+printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+   irq_info->irq, irq_info->ret);
+}
+
+void ffa_notif_init(void)
+{
+const struct arm_smccc_1_2_regs arg = {
+.a0 = FFA_FEATURES,
+.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
+};
+struct notif_irq_info irq_info = { };
+struct arm_smccc_1_2_regs resp;
+unsigned int cpu;
+
+arm_smccc_1_2_smc(&arg, &resp);
+if ( resp.a0 != FFA_SUCCESS_32 )
+return;
+
+irq_info.irq = resp.a2;
+if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= NR_GIC_SGI )
+{
+printk(XENLOG_ERR "ffa: notification initialization failed: conflicting SGI 
%u\n",
+   irq_info.irq);
+return;
+}
+
+/*
+ * SGIs are per-CPU so we must enable the IRQ on each CPU. We use an
+ * IPI to call notif_irq_enable() on each CPU including the current
+ * CPU. The struct irqaction is preallocated since we can't allocate
+ * memory while in interrupt context.
+ */
+for_each_online_cpu(cpu)
Even though we currently don't support CPU hotplug, you want to add a 
CPU Notifier to also register the IRQ when a CPU is onlined after 
ffa_notif_init().


For an example, see time.c. We may also want to consider to enable TEE 
in presmp_initcalls() so we don't need to have a for_each_online_cpu().
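
Something like the below would be the rough shape of it (a sketch loosely
modelled on time.c; the callback name and the per-CPU setup call are
placeholders for whatever ffa_notif.c ends up using):

static int notif_cpu_callback(struct notifier_block *nfb,
                              unsigned long action, void *hcpu)
{
    if ( action == CPU_STARTING )
    {
        /* Runs on the CPU coming online: enable the SRI SGI there. */
        /* notif_irq_enable(...);  placeholder */
    }

    return NOTIFY_DONE;
}

static struct notifier_block notif_cpu_nfb = {
    .notifier_call = notif_cpu_callback,
};

/* In ffa_notif_init(): */
register_cpu_notifier(&notif_cpu_nfb);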


Cheers,

--
Julien Grall



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Bertrand Marquis
Hi Jens,

> On 26 Apr 2024, at 15:02, Jens Wiklander  wrote:
> 
> On Fri, Apr 26, 2024 at 2:41 PM Bertrand Marquis
>  wrote:
>> 
>> Hi Jens,
>> 
>>> On 26 Apr 2024, at 14:32, Jens Wiklander  wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On Fri, Apr 26, 2024 at 2:19 PM Bertrand Marquis
>>>  wrote:
 
 Hi Jens,
 
> On 26 Apr 2024, at 14:11, Jens Wiklander  
> wrote:
> 
> Hi Bertrand,
> 
> On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
>  wrote:
>> 
>> Hi Jens,
>> 
>>> On 26 Apr 2024, at 10:47, Jens Wiklander  
>>> wrote:
>>> 
>>> [...]
>>> +struct notif_irq_info {
>>> +unsigned int irq;
>>> +int ret;
>>> +struct irqaction *action;
>>> +};
>>> +
>>> +static void notif_irq_enable(void *info)
>>> +{
>>> +struct notif_irq_info *irq_info = info;
>>> +
>>> +irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
>>> +if ( irq_info->ret )
>>> +printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
>>> +   irq_info->irq, irq_info->ret);
>>> +}
>>> +
>>> +void ffa_notif_init(void)
>>> +{
>>> +const struct arm_smccc_1_2_regs arg = {
>>> +.a0 = FFA_FEATURES,
>>> +.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
>>> +};
>>> +struct notif_irq_info irq_info = { };
>>> +struct arm_smccc_1_2_regs resp;
>>> +unsigned int cpu;
>>> +
>>> +arm_smccc_1_2_smc(&arg, &resp);
>>> +if ( resp.a0 != FFA_SUCCESS_32 )
>>> +return;
>>> +
>>> +irq_info.irq = resp.a2;
>>> +if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= 
>>> NR_GIC_SGI )
>>> +{
>>> +printk(XENLOG_ERR "ffa: notification initialization failed: 
>>> conflicting SGI %u\n",
>>> +   irq_info.irq);
>>> +return;
>>> +}
>>> +
>>> +/*
>>> + * SGIs are per-CPU so we must enable the IRQ on each CPU. We use 
>>> an
>>> + * IPI to call notif_irq_enable() on each CPU including the current
>>> + * CPU. The struct irqaction is preallocated since we can't 
>>> allocate
>>> + * memory while in interrupt context.
>>> + */
>>> +for_each_online_cpu(cpu)
>>> +{
>>> +irq_info.action = xmalloc(struct irqaction);
>> 
>> You allocate one action per cpu but you have only one action pointer in 
>> your structure
>> which means you will overload the previously allocated one and lose 
>> track.
>> 
>> You should have a table of actions in your structure instead unless one 
>> action is
>> enough and can be reused on all cpus and in this case you should move 
>> out of
>> your loop the allocation part.
> 
> That shouldn't be needed because this is done in sequence only one CPU
> at a time.
 
 Sorry i do not understand here.
 You have a loop over each online cpu and on each loop you are assigning
 irq_info.action with a newly allocated struct irqaction so you are in 
 practice
 overloading on cpu 2 the action that was allocated for cpu 1.
 
 What do you mean by sequence here ?
 
>>> 
>>> My understanding is that for_each_online_cpu(cpu) loops over each cpu,
>>> one at a time. The call
>>> on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
>>> returns after notif_irq_enable() has returned on the CPU in question
>>> thanks to the "1" (wait) parameter. So once it has returned &irq_info
>>> isn't used by the other CPU any longer and we can assign a new value
>>> to irq_info.action.
>> 
>> Right, so you lose track of what was assigned, so you are not able to
>> free it.
>> If that is wanted then why save this in irq_info.action, as you will only
>> have there the one allocated for the last online CPU.
> 
> Wouldn't release_irq() free it? An error here is unlikely, but we may
> be left with a few installed struct irqaction if it occurs. I can add
> a more elaborate error path if it's worth the added complexity.

I think just add a comment saying that the irqaction will be freed
upon release_irq() so we do not keep a reference to it, or something
like that, and this will be ok.

The code is in fact a bit misleading because the irqaction is used
inside the function called on the other cores through the IPI, and there
you actually pass the action. Your structure is only there to transport
the information for the IPI handler.
So please add a comment on top of notif_irq_info saying that this
structure is used to pass information to, and back from, notif_irq_enable()
executed using an IPI on the other cores.
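
For instance something like this (only a sketch of the wording, not the
final patch):

/*
 * notif_irq_info is only used to pass information to, and get the result
 * back from, notif_irq_enable() which is run on each CPU via an IPI
 * (on_selected_cpus()). The irqaction is handed over to setup_irq() and
 * will be freed by release_irq(), so no reference to it is kept here.
 */
struct notif_irq_info {
    unsigned int irq;
    int ret;
    struct irqaction *action;
};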

Cheers
Bertrand


> 
> Thanks,
> Jens




Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Jens Wiklander
On Fri, Apr 26, 2024 at 2:41 PM Bertrand Marquis
 wrote:
>
> Hi Jens,
>
> > On 26 Apr 2024, at 14:32, Jens Wiklander  wrote:
> >
> > Hi Bertrand,
> >
> > On Fri, Apr 26, 2024 at 2:19 PM Bertrand Marquis
> >  wrote:
> >>
> >> Hi Jens,
> >>
> >>> On 26 Apr 2024, at 14:11, Jens Wiklander  
> >>> wrote:
> >>>
> >>> Hi Bertrand,
> >>>
> >>> On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
> >>>  wrote:
> 
>  Hi Jens,
> 
> > On 26 Apr 2024, at 10:47, Jens Wiklander  
> > wrote:
> >
> > [...]
> > +struct notif_irq_info {
> > +unsigned int irq;
> > +int ret;
> > +struct irqaction *action;
> > +};
> > +
> > +static void notif_irq_enable(void *info)
> > +{
> > +struct notif_irq_info *irq_info = info;
> > +
> > +irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
> > +if ( irq_info->ret )
> > +printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
> > +   irq_info->irq, irq_info->ret);
> > +}
> > +
> > +void ffa_notif_init(void)
> > +{
> > +const struct arm_smccc_1_2_regs arg = {
> > +.a0 = FFA_FEATURES,
> > +.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
> > +};
> > +struct notif_irq_info irq_info = { };
> > +struct arm_smccc_1_2_regs resp;
> > +unsigned int cpu;
> > +
> > +arm_smccc_1_2_smc(&arg, &resp);
> > +if ( resp.a0 != FFA_SUCCESS_32 )
> > +return;
> > +
> > +irq_info.irq = resp.a2;
> > +if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= 
> > NR_GIC_SGI )
> > +{
> > +printk(XENLOG_ERR "ffa: notification initialization failed: 
> > conflicting SGI %u\n",
> > +   irq_info.irq);
> > +return;
> > +}
> > +
> > +/*
> > + * SGIs are per-CPU so we must enable the IRQ on each CPU. We use 
> > an
> > + * IPI to call notif_irq_enable() on each CPU including the current
> > + * CPU. The struct irqaction is preallocated since we can't 
> > allocate
> > + * memory while in interrupt context.
> > + */
> > +for_each_online_cpu(cpu)
> > +{
> > +irq_info.action = xmalloc(struct irqaction);
> 
>  You allocate one action per cpu but you have only one action pointer in 
>  your structure
>  which means you will overload the previously allocated one and lose 
>  track.
> 
>  You should have a table of actions in your structure instead unless one 
>  action is
>  enough and can be reused on all cpus and in this case you should move 
>  out of
>  your loop the allocation part.
> >>>
> >>> That shouldn't be needed because this is done in sequence only one CPU
> >>> at a time.
> >>
> >> Sorry i do not understand here.
> >> You have a loop over each online cpu and on each loop you are assigning
> >> irq_info.action with a newly allocated struct irqaction so you are in 
> >> practice
> >> overloading on cpu 2 the action that was allocated for cpu 1.
> >>
> >> What do you mean by sequence here ?
> >>
> >
> > My understanding is that for_each_online_cpu(cpu) loops over each cpu,
> > one at a time. The call
> > on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
> > returns after notif_irq_enable() has returned on the CPU in question
> > thanks to the "1" (wait) parameter. So once it has returned &irq_info
> > isn't used by the other CPU any longer and we can assign a new value
> > to irq_info.action.
>
> Right, so you lose track of what was assigned, so you are not able to
> free it.
> If that is wanted then why save this in irq_info.action, as you will only
> have there the one allocated for the last online CPU.

Wouldn't release_irq() free it? An error here is unlikely, but we may
be left with a few installed struct irqaction if it occurs. I can add
a more elaborate error path if it's worth the added complexity.
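
For reference, a minimal variant could look like this (assuming setup_irq()
does not take ownership of the action when it fails):

on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
if ( irq_info.ret )
{
    /*
     * setup_irq() failed on this CPU, so it did not consume the action:
     * free it and stop. Actions already installed on other CPUs remain
     * and are freed by release_irq() if they are ever released.
     */
    XFREE(irq_info.action);
    break;
}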

Thanks,
Jens



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Bertrand Marquis
Hi Jens,

> On 26 Apr 2024, at 14:32, Jens Wiklander  wrote:
> 
> Hi Bertrand,
> 
> On Fri, Apr 26, 2024 at 2:19 PM Bertrand Marquis
>  wrote:
>> 
>> Hi Jens,
>> 
>>> On 26 Apr 2024, at 14:11, Jens Wiklander  wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
>>>  wrote:
 
 Hi Jens,
 
> On 26 Apr 2024, at 10:47, Jens Wiklander  
> wrote:
> 
> [...]
> +struct notif_irq_info {
> +unsigned int irq;
> +int ret;
> +struct irqaction *action;
> +};
> +
> +static void notif_irq_enable(void *info)
> +{
> +struct notif_irq_info *irq_info = info;
> +
> +irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
> +if ( irq_info->ret )
> +printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
> +   irq_info->irq, irq_info->ret);
> +}
> +
> +void ffa_notif_init(void)
> +{
> +const struct arm_smccc_1_2_regs arg = {
> +.a0 = FFA_FEATURES,
> +.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
> +};
> +struct notif_irq_info irq_info = { };
> +struct arm_smccc_1_2_regs resp;
> +unsigned int cpu;
> +
> +arm_smccc_1_2_smc(&arg, &resp);
> +if ( resp.a0 != FFA_SUCCESS_32 )
> +return;
> +
> +irq_info.irq = resp.a2;
> +if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= NR_GIC_SGI 
> )
> +{
> +printk(XENLOG_ERR "ffa: notification initialization failed: 
> conflicting SGI %u\n",
> +   irq_info.irq);
> +return;
> +}
> +
> +/*
> + * SGIs are per-CPU so we must enable the IRQ on each CPU. We use an
> + * IPI to call notif_irq_enable() on each CPU including the current
> + * CPU. The struct irqaction is preallocated since we can't allocate
> + * memory while in interrupt context.
> + */
> +for_each_online_cpu(cpu)
> +{
> +irq_info.action = xmalloc(struct irqaction);
 
 You allocate one action per cpu but you have only one action pointer in 
 your structure
 which means you will overload the previously allocated one and lose track.
 
 You should have a table of actions in your structure instead unless one 
 action is
 enough and can be reused on all cpus and in this case you should move out 
 of
 your loop the allocation part.
>>> 
>>> That shouldn't be needed because this is done in sequence only one CPU
>>> at a time.
>> 
>> Sorry i do not understand here.
>> You have a loop over each online cpu and on each loop you are assigning
>> irq_info.action with a newly allocated struct irqaction so you are in 
>> practice
>> overloading on cpu 2 the action that was allocated for cpu 1.
>> 
>> What do you mean by sequence here ?
>> 
> 
> My understanding is that for_each_online_cpu(cpu) loops over each cpu,
> one at a time. The call
> on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
> returns after notif_irq_enable() has returned on the CPU in question
> thanks to the "1" (wait) parameter. So once it has returned &irq_info
> isn't used by the other CPU any longer and we can assign a new value
> to irq_info.action.

Right, so you lose track of what was assigned, so you are not able to
free it.
If that is wanted then why save this in irq_info.action, as you will only
have there the one allocated for the last online CPU.

Cheers
Bertrand

> 
> Thanks,
> Jens




Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Jens Wiklander
Hi Bertrand,

On Fri, Apr 26, 2024 at 2:19 PM Bertrand Marquis
 wrote:
>
> Hi Jens,
>
> > On 26 Apr 2024, at 14:11, Jens Wiklander  wrote:
> >
> > Hi Bertrand,
> >
> > On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
> >  wrote:
> >>
> >> Hi Jens,
> >>
> >>> On 26 Apr 2024, at 10:47, Jens Wiklander  
> >>> wrote:
> >>>
[...]
> >>> +struct notif_irq_info {
> >>> +unsigned int irq;
> >>> +int ret;
> >>> +struct irqaction *action;
> >>> +};
> >>> +
> >>> +static void notif_irq_enable(void *info)
> >>> +{
> >>> +struct notif_irq_info *irq_info = info;
> >>> +
> >>> +irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
> >>> +if ( irq_info->ret )
> >>> +printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
> >>> +   irq_info->irq, irq_info->ret);
> >>> +}
> >>> +
> >>> +void ffa_notif_init(void)
> >>> +{
> >>> +const struct arm_smccc_1_2_regs arg = {
> >>> +.a0 = FFA_FEATURES,
> >>> +.a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
> >>> +};
> >>> +struct notif_irq_info irq_info = { };
> >>> +struct arm_smccc_1_2_regs resp;
> >>> +unsigned int cpu;
> >>> +
> >>> +arm_smccc_1_2_smc(&arg, &resp);
> >>> +if ( resp.a0 != FFA_SUCCESS_32 )
> >>> +return;
> >>> +
> >>> +irq_info.irq = resp.a2;
> >>> +if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= NR_GIC_SGI 
> >>> )
> >>> +{
> >>> +printk(XENLOG_ERR "ffa: notification initialization failed: 
> >>> conflicting SGI %u\n",
> >>> +   irq_info.irq);
> >>> +return;
> >>> +}
> >>> +
> >>> +/*
> >>> + * SGIs are per-CPU so we must enable the IRQ on each CPU. We use an
> >>> + * IPI to call notif_irq_enable() on each CPU including the current
> >>> + * CPU. The struct irqaction is preallocated since we can't allocate
> >>> + * memory while in interrupt context.
> >>> + */
> >>> +for_each_online_cpu(cpu)
> >>> +{
> >>> +irq_info.action = xmalloc(struct irqaction);
> >>
> >> You allocate one action per cpu but you have only one action pointer in 
> >> your structure
> >> which means you will overload the previously allocated one and lose track.
> >>
> >> You should have a table of actions in your structure instead unless one 
> >> action is
> >> enough and can be reused on all cpus and in this case you should move out 
> >> of
> >> your loop the allocation part.
> >
> > That shouldn't be needed because this is done in sequence only one CPU
> > at a time.
>
> Sorry i do not understand here.
> You have a loop over each online cpu and on each loop you are assigning
> irq_info.action with a newly allocated struct irqaction so you are in practice
> overloading on cpu 2 the action that was allocated for cpu 1.
>
> What do you mean by sequence here ?
>

My understanding is that for_each_online_cpu(cpu) loops over each cpu,
one at a time. The call
on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
returns after notif_irq_enable() has returned on the CPU in question
thanks to the "1" (wait) parameter. So once it has returned &irq_info
isn't used by the other CPU any longer and we can assign a new value
to irq_info.action.
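
In other words, the loop boils down to the following (condensed sketch,
error handling and the irqaction fields elided):

for_each_online_cpu(cpu)
{
    /* A fresh irqaction per CPU; setup_irq() takes ownership of it. */
    irq_info.action = xmalloc(struct irqaction);
    if ( !irq_info.action )
        break;

    /*
     * wait == 1: on_selected_cpus() only returns once notif_irq_enable()
     * has finished on that CPU, so irq_info can safely be reused in the
     * next iteration.
     */
    on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
    if ( irq_info.ret )
        break;
}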

Thanks,
Jens



Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Bertrand Marquis
Hi Jens,

> On 26 Apr 2024, at 14:11, Jens Wiklander  wrote:
> 
> Hi Bertrand,
> 
> On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
>  wrote:
>> 
>> Hi Jens,
>> 
>>> On 26 Apr 2024, at 10:47, Jens Wiklander  wrote:
>>> 
>>> Add support for FF-A notifications, currently limited to an SP (Secure
>>> Partition) sending an asynchronous notification to a guest.
>>> 
>>> Guests and Xen itself are made aware of pending notifications with an
>>> interrupt. The interrupt handler retrieves the notifications using the
>>> FF-A ABI and delivers them to their destinations.
>>> 
>>> Update ffa_partinfo_domain_init() to return an error code like
>>> ffa_notif_domain_init().
>>> 
>>> Signed-off-by: Jens Wiklander 
>>> ---
>>> v2->v3:
>>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
>>> FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
>>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
>>> setup_irq()
>>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
>>> doesn't conflict with static SGI handlers
>>> 
>>> v1->v2:
>>> - Addressing review comments
>>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
>>> cpu_user_regs *regs as argument.
>>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
>>> an error code.
>>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
>>> ---
>>> xen/arch/arm/irq.c  |   2 +-
>>> xen/arch/arm/tee/Makefile   |   1 +
>>> xen/arch/arm/tee/ffa.c  |  55 -
>>> xen/arch/arm/tee/ffa_notif.c| 378 
>>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
>>> xen/arch/arm/tee/ffa_private.h  |  56 -
>>> xen/include/public/arch-arm.h   |  14 ++
>>> 7 files changed, 507 insertions(+), 8 deletions(-)
>>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
>>> 
>>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>>> index d7306aa64d50..5224898265a5 100644
>>> --- a/xen/arch/arm/irq.c
>>> +++ b/xen/arch/arm/irq.c
>>> @@ -155,7 +155,7 @@ void __init init_IRQ(void)
>>>{
>>>/* SGIs are always edge-triggered */
>>>if ( irq < NR_GIC_SGI )
>>> -local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
>>> +local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
>>>else
>>>local_irqs_type[irq] = IRQ_TYPE_INVALID;
>>>}
>>> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
>>> index f0112a2f922d..7c0f46f7f446 100644
>>> --- a/xen/arch/arm/tee/Makefile
>>> +++ b/xen/arch/arm/tee/Makefile
>>> @@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
>>> obj-$(CONFIG_FFA) += ffa_shm.o
>>> obj-$(CONFIG_FFA) += ffa_partinfo.o
>>> obj-$(CONFIG_FFA) += ffa_rxtx.o
>>> +obj-$(CONFIG_FFA) += ffa_notif.o
>>> obj-y += tee.o
>>> obj-$(CONFIG_OPTEE) += optee.o
>>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>>> index 5209612963e1..912e905a27bd 100644
>>> --- a/xen/arch/arm/tee/ffa.c
>>> +++ b/xen/arch/arm/tee/ffa.c
>>> @@ -39,6 +39,9 @@
>>> *   - at most 32 shared memory regions per guest
>>> * o FFA_MSG_SEND_DIRECT_REQ:
>>> *   - only supported from a VM to an SP
>>> + * o FFA_NOTIFICATION_*:
>>> + *   - only supports global notifications, that is, per vCPU notifications
>>> + * are not supported
>>> *
>>> * There are some large locked sections with ffa_tx_buffer_lock and
>>> * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
>>> @@ -194,6 +197,8 @@ out:
>>> 
>>> static void handle_features(struct cpu_user_regs *regs)
>>> {
>>> +struct domain *d = current->domain;
>>> +struct ffa_ctx *ctx = d->arch.tee;
>>>uint32_t a1 = get_user_reg(regs, 1);
>>>unsigned int n;
>>> 
>>> @@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
>>>BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
>>>ffa_set_regs_success(regs, 0, 0);
>>>break;
>>> +case FFA_FEATURE_NOTIF_PEND_INTR:
>>> +if ( ctx->notif.enabled )
>>> +ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
>>> +else
>>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>>> +break;
>>> +case FFA_FEATURE_SCHEDULE_RECV_INTR:
>>> +if ( ctx->notif.enabled )
>>> +ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
>>> +else
>>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>>> +break;
>>> +
>>> +case FFA_NOTIFICATION_BIND:
>>> +case FFA_NOTIFICATION_UNBIND:
>>> +case FFA_NOTIFICATION_GET:
>>> +case FFA_NOTIFICATION_SET:
>>> +case FFA_NOTIFICATION_INFO_GET_32:
>>> +case FFA_NOTIFICATION_INFO_GET_64:
>>> +if ( ctx->notif.enabled )
>>> +ffa_set_regs_success(regs, 0, 0);
>>> +else
>>> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>>> +break;
>>>default:
>>>ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>>>break;
>>> @@ -305,6 +334,22 @@ static bool 

Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Jens Wiklander
Hi Bertrand,

On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
 wrote:
>
> Hi Jens,
>
> > On 26 Apr 2024, at 10:47, Jens Wiklander  wrote:
> >
> > Add support for FF-A notifications, currently limited to an SP (Secure
> > Partition) sending an asynchronous notification to a guest.
> >
> > Guests and Xen itself are made aware of pending notifications with an
> > interrupt. The interrupt handler retrieves the notifications using the
> > FF-A ABI and delivers them to their destinations.
> >
> > Update ffa_partinfo_domain_init() to return an error code like
> > ffa_notif_domain_init().
> >
> > Signed-off-by: Jens Wiklander 
> > ---
> > v2->v3:
> > - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
> >  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
> > - Register the Xen SRI handler on each CPU using on_selected_cpus() and
> >  setup_irq()
> > - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
> >  doesn't conflict with static SGI handlers
> >
> > v1->v2:
> > - Addressing review comments
> > - Change ffa_handle_notification_{bind,unbind,set}() to take struct
> >  cpu_user_regs *regs as argument.
> > - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
> >  an error code.
> > - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
> > ---
> > xen/arch/arm/irq.c  |   2 +-
> > xen/arch/arm/tee/Makefile   |   1 +
> > xen/arch/arm/tee/ffa.c  |  55 -
> > xen/arch/arm/tee/ffa_notif.c| 378 
> > xen/arch/arm/tee/ffa_partinfo.c |   9 +-
> > xen/arch/arm/tee/ffa_private.h  |  56 -
> > xen/include/public/arch-arm.h   |  14 ++
> > 7 files changed, 507 insertions(+), 8 deletions(-)
> > create mode 100644 xen/arch/arm/tee/ffa_notif.c
> >
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index d7306aa64d50..5224898265a5 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -155,7 +155,7 @@ void __init init_IRQ(void)
> > {
> > /* SGIs are always edge-triggered */
> > if ( irq < NR_GIC_SGI )
> > -local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
> > +local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
> > else
> > local_irqs_type[irq] = IRQ_TYPE_INVALID;
> > }
> > diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> > index f0112a2f922d..7c0f46f7f446 100644
> > --- a/xen/arch/arm/tee/Makefile
> > +++ b/xen/arch/arm/tee/Makefile
> > @@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
> > obj-$(CONFIG_FFA) += ffa_shm.o
> > obj-$(CONFIG_FFA) += ffa_partinfo.o
> > obj-$(CONFIG_FFA) += ffa_rxtx.o
> > +obj-$(CONFIG_FFA) += ffa_notif.o
> > obj-y += tee.o
> > obj-$(CONFIG_OPTEE) += optee.o
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index 5209612963e1..912e905a27bd 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -39,6 +39,9 @@
> >  *   - at most 32 shared memory regions per guest
> >  * o FFA_MSG_SEND_DIRECT_REQ:
> >  *   - only supported from a VM to an SP
> > + * o FFA_NOTIFICATION_*:
> > + *   - only supports global notifications, that is, per vCPU notifications
> > + * are not supported
> >  *
> >  * There are some large locked sections with ffa_tx_buffer_lock and
> >  * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
> > @@ -194,6 +197,8 @@ out:
> >
> > static void handle_features(struct cpu_user_regs *regs)
> > {
> > +struct domain *d = current->domain;
> > +struct ffa_ctx *ctx = d->arch.tee;
> > uint32_t a1 = get_user_reg(regs, 1);
> > unsigned int n;
> >
> > @@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
> > BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
> > ffa_set_regs_success(regs, 0, 0);
> > break;
> > +case FFA_FEATURE_NOTIF_PEND_INTR:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> > +case FFA_FEATURE_SCHEDULE_RECV_INTR:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> > +
> > +case FFA_NOTIFICATION_BIND:
> > +case FFA_NOTIFICATION_UNBIND:
> > +case FFA_NOTIFICATION_GET:
> > +case FFA_NOTIFICATION_SET:
> > +case FFA_NOTIFICATION_INFO_GET_32:
> > +case FFA_NOTIFICATION_INFO_GET_64:
> > +if ( ctx->notif.enabled )
> > +ffa_set_regs_success(regs, 0, 0);
> > +else
> > +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > +break;
> > default:
> > ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> > break;
> > @@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
> >   

Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Bertrand Marquis
Hi Jens,

> On 26 Apr 2024, at 10:47, Jens Wiklander  wrote:
> 
> Add support for FF-A notifications, currently limited to an SP (Secure
> Partition) sending an asynchronous notification to a guest.
> 
> Guests and Xen itself are made aware of pending notifications with an
> interrupt. The interrupt handler retrieves the notifications using the
> FF-A ABI and delivers them to their destinations.
> 
> Update ffa_partinfo_domain_init() to return an error code like
> ffa_notif_domain_init().
> 
> Signed-off-by: Jens Wiklander 
> ---
> v2->v3:
> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
>  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
>  setup_irq()
> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
>  doesn't conflict with static SGI handlers
> 
> v1->v2:
> - Addressing review comments
> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
>  cpu_user_regs *regs as argument.
> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
>  an error code.
> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
> ---
> xen/arch/arm/irq.c  |   2 +-
> xen/arch/arm/tee/Makefile   |   1 +
> xen/arch/arm/tee/ffa.c  |  55 -
> xen/arch/arm/tee/ffa_notif.c| 378 
> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
> xen/arch/arm/tee/ffa_private.h  |  56 -
> xen/include/public/arch-arm.h   |  14 ++
> 7 files changed, 507 insertions(+), 8 deletions(-)
> create mode 100644 xen/arch/arm/tee/ffa_notif.c
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index d7306aa64d50..5224898265a5 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -155,7 +155,7 @@ void __init init_IRQ(void)
> {
> /* SGIs are always edge-triggered */
> if ( irq < NR_GIC_SGI )
> -local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
> +local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
> else
> local_irqs_type[irq] = IRQ_TYPE_INVALID;
> }
> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> index f0112a2f922d..7c0f46f7f446 100644
> --- a/xen/arch/arm/tee/Makefile
> +++ b/xen/arch/arm/tee/Makefile
> @@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
> obj-$(CONFIG_FFA) += ffa_shm.o
> obj-$(CONFIG_FFA) += ffa_partinfo.o
> obj-$(CONFIG_FFA) += ffa_rxtx.o
> +obj-$(CONFIG_FFA) += ffa_notif.o
> obj-y += tee.o
> obj-$(CONFIG_OPTEE) += optee.o
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 5209612963e1..912e905a27bd 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -39,6 +39,9 @@
>  *   - at most 32 shared memory regions per guest
>  * o FFA_MSG_SEND_DIRECT_REQ:
>  *   - only supported from a VM to an SP
> + * o FFA_NOTIFICATION_*:
> + *   - only supports global notifications, that is, per vCPU notifications
> + * are not supported
>  *
>  * There are some large locked sections with ffa_tx_buffer_lock and
>  * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
> @@ -194,6 +197,8 @@ out:
> 
> static void handle_features(struct cpu_user_regs *regs)
> {
> +struct domain *d = current->domain;
> +struct ffa_ctx *ctx = d->arch.tee;
> uint32_t a1 = get_user_reg(regs, 1);
> unsigned int n;
> 
> @@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
> BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
> ffa_set_regs_success(regs, 0, 0);
> break;
> +case FFA_FEATURE_NOTIF_PEND_INTR:
> +if ( ctx->notif.enabled )
> +ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
> +else
> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> +break;
> +case FFA_FEATURE_SCHEDULE_RECV_INTR:
> +if ( ctx->notif.enabled )
> +ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
> +else
> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> +break;
> +
> +case FFA_NOTIFICATION_BIND:
> +case FFA_NOTIFICATION_UNBIND:
> +case FFA_NOTIFICATION_GET:
> +case FFA_NOTIFICATION_SET:
> +case FFA_NOTIFICATION_INFO_GET_32:
> +case FFA_NOTIFICATION_INFO_GET_64:
> +if ( ctx->notif.enabled )
> +ffa_set_regs_success(regs, 0, 0);
> +else
> +ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> +break;
> default:
> ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
> break;
> @@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>  get_user_reg(regs, 1)),
>get_user_reg(regs, 3));
> break;
> +case FFA_NOTIFICATION_BIND:
> +e = ffa_handle_notification_bind(regs);
> +break;
> +case FFA_NOTIFICATION_UNBIND:
> +e = 

[XEN PATCH v3 5/5] xen/arm: ffa: support notification

2024-04-26 Thread Jens Wiklander
Add support for FF-A notifications, currently limited to an SP (Secure
Partition) sending an asynchronous notification to a guest.

Guests and Xen itself are made aware of pending notifications with an
interrupt. The interrupt handler retrieves the notifications using the
FF-A ABI and delivers them to their destinations.

Update ffa_partinfo_domain_init() to return an error code like
ffa_notif_domain_init().

Signed-off-by: Jens Wiklander 
---
v2->v3:
- Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
- Register the Xen SRI handler on each CPU using on_selected_cpus() and
  setup_irq()
- Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
  doesn't conflict with static SGI handlers

v1->v2:
- Addressing review comments
- Change ffa_handle_notification_{bind,unbind,set}() to take struct
  cpu_user_regs *regs as argument.
- Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
  an error code.
- Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
---
 xen/arch/arm/irq.c  |   2 +-
 xen/arch/arm/tee/Makefile   |   1 +
 xen/arch/arm/tee/ffa.c  |  55 -
 xen/arch/arm/tee/ffa_notif.c| 378 
 xen/arch/arm/tee/ffa_partinfo.c |   9 +-
 xen/arch/arm/tee/ffa_private.h  |  56 -
 xen/include/public/arch-arm.h   |  14 ++
 7 files changed, 507 insertions(+), 8 deletions(-)
 create mode 100644 xen/arch/arm/tee/ffa_notif.c

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index d7306aa64d50..5224898265a5 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -155,7 +155,7 @@ void __init init_IRQ(void)
 {
 /* SGIs are always edge-triggered */
 if ( irq < NR_GIC_SGI )
-local_irqs_type[irq] = DT_IRQ_TYPE_EDGE_RISING;
+local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
 else
 local_irqs_type[irq] = IRQ_TYPE_INVALID;
 }
diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index f0112a2f922d..7c0f46f7f446 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
 obj-$(CONFIG_FFA) += ffa_shm.o
 obj-$(CONFIG_FFA) += ffa_partinfo.o
 obj-$(CONFIG_FFA) += ffa_rxtx.o
+obj-$(CONFIG_FFA) += ffa_notif.o
 obj-y += tee.o
 obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 5209612963e1..912e905a27bd 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -39,6 +39,9 @@
  *   - at most 32 shared memory regions per guest
  * o FFA_MSG_SEND_DIRECT_REQ:
  *   - only supported from a VM to an SP
+ * o FFA_NOTIFICATION_*:
+ *   - only supports global notifications, that is, per vCPU notifications
+ * are not supported
  *
  * There are some large locked sections with ffa_tx_buffer_lock and
  * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
@@ -194,6 +197,8 @@ out:
 
 static void handle_features(struct cpu_user_regs *regs)
 {
+struct domain *d = current->domain;
+struct ffa_ctx *ctx = d->arch.tee;
 uint32_t a1 = get_user_reg(regs, 1);
 unsigned int n;
 
@@ -240,6 +245,30 @@ static void handle_features(struct cpu_user_regs *regs)
 BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
 ffa_set_regs_success(regs, 0, 0);
 break;
+case FFA_FEATURE_NOTIF_PEND_INTR:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
+case FFA_FEATURE_SCHEDULE_RECV_INTR:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
+
+case FFA_NOTIFICATION_BIND:
+case FFA_NOTIFICATION_UNBIND:
+case FFA_NOTIFICATION_GET:
+case FFA_NOTIFICATION_SET:
+case FFA_NOTIFICATION_INFO_GET_32:
+case FFA_NOTIFICATION_INFO_GET_64:
+if ( ctx->notif.enabled )
+ffa_set_regs_success(regs, 0, 0);
+else
+ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+break;
 default:
 ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
 break;
@@ -305,6 +334,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
  get_user_reg(regs, 1)),
get_user_reg(regs, 3));
 break;
+case FFA_NOTIFICATION_BIND:
+e = ffa_handle_notification_bind(regs);
+break;
+case FFA_NOTIFICATION_UNBIND:
+e = ffa_handle_notification_unbind(regs);
+break;
+case FFA_NOTIFICATION_INFO_GET_32:
+case FFA_NOTIFICATION_INFO_GET_64:
+ffa_handle_notification_info_get(regs);
+return true;
+case FFA_NOTIFICATION_GET:
+ffa_handle_notification_get(regs);
+return