Re: [XenPPC] [PATCH] Enable SMP and IPIs

2006-10-25 Thread Jimi Xenidis


On Oct 24, 2006, at 9:15 PM, Amos Waterland wrote:


On Tue, Oct 24, 2006 at 05:54:47AM -0400, Jimi Xenidis wrote:

On Oct 24, 2006, at 12:22 AM, Amos Waterland wrote:

Note that the flurry of IPIs generated by domain creation seems to
be a waste of time.  Xen on x86 doesn't actually do anything in
response to them, so I have made Xen on PPC properly deliver the
event check but then do nothing as well.  Comments?


What do you think we should be doing in response to these event check
IPIs?


It seems to be used to force a preemption event on a CPU.
The particular use is probably to wake CPUs napping in the idle
domain via "raise_softirq".
However, I would think that this could have been done with a "call
function".



  I thought that there were hopes of improving interactivity of
single-cpu systems ...


Not sure what your point is here. I don't see how any IPI can do that.




I know this is RFC, and you know I'm a big fan of BUG(), but
eventually I'd expect:
  BUG_ON(!action);
  if (!action)
      return -ENOMEM;


I think your comments here are predicated on the assumption that BUG
macros become noops when not compiling with debug.  That is not the case
in Xen, so I thought it was a waste of time to put in extra code.  Do
you think it might someday not be the case?


I personally consider BUG() more of a developer tool that in general
should not be treated as no-return.  The Xen maintainers do not agree
with this, but I'd still like to see it reflected in our code.


However, you raise an interesting issue: I would have thought that
the caller of request_irq() would check and BUG/panic.  Linux uses
this as a standard service, and we are using it as a compatibility
function for the mpic.c driver, which does _not_ check the return
value.


-JX


___
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel


Re: [XenPPC] [PATCH] Enable SMP and IPIs

2006-10-24 Thread Amos Waterland
On Tue, Oct 24, 2006 at 05:54:47AM -0400, Jimi Xenidis wrote:
> On Oct 24, 2006, at 12:22 AM, Amos Waterland wrote:
> > Note that the flurry of IPIs generated by domain creation seems to
> > be a waste of time.  Xen on x86 doesn't actually do anything in
> > response to them, so I have made Xen on PPC properly deliver the
> > event check but then do nothing as well.  Comments?

What do you think we should be doing in response to these event check
IPIs?  I thought that there were hopes of improving interactivity of
single-cpu systems ...

> I know this is RFC, and you know I'm a big fan of BUG(), but
> eventually I'd expect:
>   BUG_ON(!action);
>   if (!action)
>       return -ENOMEM;

I think your comments here are predicated on the assumption that BUG
macros become noops when not compiling with debug.  That is not the case
in Xen, so I thought it was a waste of time to put in extra code.  Do
you think it might someday not be the case?




Re: [XenPPC] [PATCH] Enable SMP and IPIs

2006-10-24 Thread Jimi Xenidis


On Oct 24, 2006, at 9:29 AM, Michal Ostrowski wrote:


On Tue, 2006-10-24 at 00:22 -0400, Amos Waterland wrote:

This patch enables SMP and IPIs.  It works reliably on JS20 and JS21
blades.  It is not quite ready for submission, but I would greatly
appreciate any comments.

Note that the flurry of IPIs generated by domain creation seems to be a
waste of time.  Xen on x86 doesn't actually do anything in response
to them, so I have made Xen on PPC properly deliver the event check but
then do nothing as well.  Comments?

The important thing this patch is missing is the ability to invoke
functions on a remote CPU, and I have left it out because I am not yet
happy with the convention, which is to grab a lock, put the target
address in a single global shared variable, invoke a memory barrier, and
send the IPI.  I am hoping to avoid a lock around that operation by
making a per-IPI-destination-CPU structure with lock, address, and ack
fields.  Comments?



The strategy I decided to use in rhype was to maintain an array of
function "records" (fn ptr + arg data), and a per-cpu bit-map
identifying which ones were active.  If I want to run something on
another CPU, I find and fill in an empty function record for the target
CPU and then set the bit in the bitmap.  The receiver CPU checks to see
if there is work to be done and, if so, runs the specified function.
This mechanism makes the caller responsible for managing
return values/ACKs.

I hope to use this to drive RCU activity as well, so it is not tied
down to an actual IPI vector occurring.  In fact, I have not yet had a
need to actually depend on an IPI from the mpic.


Ahh yes.. the high-frequency periodic timer on a hypervisor that does
not take interrupts. :-P

How would you take an IPI anyway?!
I guess you would use it to interrupt an LPAR?

-JX





Re: [XenPPC] [PATCH] Enable SMP and IPIs

2006-10-24 Thread Michal Ostrowski
On Tue, 2006-10-24 at 00:22 -0400, Amos Waterland wrote:
> This patch enables SMP and IPIs.  It works reliably on JS20 and JS21
> blades.  It is not quite ready for submission, but I would greatly
> appreciate any comments.
> 
> Note that the flurry of IPIs generated by domain creation seems to be a
> waste of time.  Xen on x86 doesn't actually do anything in response
> to them, so I have made Xen on PPC properly deliver the event check but
> then do nothing as well.  Comments?
> 
> The important thing this patch is missing is the ability to invoke
> functions on a remote CPU, and I have left it out because I am not yet
> happy with the convention, which is to grab a lock, put the target
> address in a single global shared variable, invoke a memory barrier, and
> send the IPI.  I am hoping to avoid a lock around that operation by
> making a per-IPI-destination-CPU structure with lock, address, and ack
> fields.  Comments?
> 

The strategy I decided to use in rhype was to maintain an array of
function "records" (fn ptr + arg data), and a per-cpu bit-map
identifying which ones were active.  If I want to run something on
another CPU, I find and fill in an empty function record for the target
CPU and then set the bit in the bitmap.  The receiver CPU checks to see
if there is work to be done and, if so, runs the specified function.
This mechanism makes the caller responsible for managing
return values/ACKs.

I hope to use this to drive RCU activity as well, so it is not tied down
to an actual IPI vector occurring.  In fact, I have not yet had a need
to actually depend on an IPI from the mpic.


-- 
Michal Ostrowski <[EMAIL PROTECTED]>




Re: [XenPPC] [PATCH] Enable SMP and IPIs

2006-10-24 Thread Jimi Xenidis


On Oct 24, 2006, at 12:22 AM, Amos Waterland wrote:


This patch enables SMP and IPIs.  It works reliably on JS20 and JS21
blades.  It is not quite ready for submission, but I would greatly
appreciate any comments.

Note that the flurry of IPIs generated by domain creation seems to be a
waste of time.  Xen on x86 doesn't actually do anything in response
to them, so I have made Xen on PPC properly deliver the event check but
then do nothing as well.  Comments?

The important thing this patch is missing is the ability to invoke
functions on a remote CPU, and I have left it out because I am not yet
happy with the convention, which is to grab a lock, put the target
address in a single global shared variable, invoke a memory barrier, and
send the IPI.  I am hoping to avoid a lock around that operation by
making a per-IPI-destination-CPU structure with lock, address, and ack
fields.  Comments?


It seems Linux-PPC does a similar thing; I'd be happy with borrowing
that code:

  arch/powerpc/kernel/smp.c:214  int smp_call_function()

I will always encourage "coming up with something better", but let's
nail it first.


[snip]


diff -r 3dfeb3e4a03f xen/arch/powerpc/mpic_init.c
--- a/xen/arch/powerpc/mpic_init.c  Fri Oct 13 11:00:32 2006 -0400
+++ b/xen/arch/powerpc/mpic_init.c  Mon Oct 23 15:32:55 2006 -0400
@@ -358,6 +358,43 @@ static struct hw_interrupt_type *share_m

 #endif

+static inline struct mpic * mpic_from_ipi(unsigned int ipi)
+{
+   return container_of(irq_desc[ipi].handler, struct mpic, hc_ipi);
+}
+
+static unsigned int mpic_startup_ipi(unsigned int irq)
+{
+printk("%s\n", __func__);
+struct mpic *pic = mpic_from_ipi(irq);
+pic->hc_ipi.enable(irq);
+return 0;
+}


at least in our code now, the mpic pointer is static to this whole file,
so mpic_from_ipi() is unnecessary.  We don't have any multi-mpic
machines yet, and looking at upstream Linux, they have solved this
problem another way.



+
+int request_irq(unsigned int irq,
+   irqreturn_t (*handler)(int, void *, struct cpu_user_regs *),
+   unsigned long irqflags, const char * devname, void *dev_id)
+{
+int retval;
+struct irqaction *action;
+
+printk("%s: %d: %p: %s: %p\n", __func__, irq, handler, devname, dev_id);

+
+action = xmalloc_bytes(sizeof(struct irqaction));


plain xmalloc() takes a type and does the sizeof() for you, so:
  action = xmalloc(struct irqaction);



+if (!action)
+   BUG();


I know this is RFC, and you know I'm a big fan of BUG(), but
eventually I'd expect:

  BUG_ON(!action);
  if (!action)
      return -ENOMEM;


+
+action->handler = (void (*)(int, void *, struct cpu_user_regs *))handler;

can you cast this with a local variable and comment that Xen's irq
action is older than Linux's?



+action->name = devname;
+action->dev_id = dev_id;
+
+retval = setup_irq(irq, action);
+if (retval)
+   BUG();


As the BUG above:
  BUG_ON(retval);
  if (retval)
      xfree(action);


+
+return retval;
+}
+
 struct hw_interrupt_type *xen_mpic_init(struct hw_interrupt_type *xen_irq)

 {
 unsigned int isu_size;
@@ -397,6 +434,11 @@ struct hw_interrupt_type *xen_mpic_init(
 hit = share_mpic(&mpic->hc_irq, xen_irq);

 printk("%s: success\n", __func__);
+
+mpic->hc_ipi.ack = xen_irq->ack;
+mpic->hc_ipi.startup = mpic_startup_ipi;
+mpic_request_ipis();
+
 return hit;
 }


[snip]

diff -r 3dfeb3e4a03f xen/arch/powerpc/smp.c
--- a/xen/arch/powerpc/smp.cFri Oct 13 11:00:32 2006 -0400
+++ b/xen/arch/powerpc/smp.cMon Oct 23 23:15:45 2006 -0400
@@ -22,6 +22,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 

 int smp_num_siblings = 1;
 int smp_num_cpus = 1;
@@ -46,11 +48,30 @@ void __flush_tlb_mask(cpumask_t mask, un
 unimplemented();
 }

+static void send_IPI_mask(cpumask_t mask, int vector)
+{
+unsigned long cpus;
+
+switch(vector) {
+case CALL_FUNCTION_VECTOR:
+case EVENT_CHECK_VECTOR:
+case INVALIDATE_TLB_VECTOR:


I'm pretty sure we should not be supporting the INVALIDATE_TLB_VECTOR
vector and will probably just do a broadcast TLB invalidate instead,
so I'd like to see this BUG().



+   break;
+default:
+   BUG();
+}
+
+BUG_ON(sizeof(mask.bits) != sizeof(unsigned long));
+cpus = mask.bits[0];
+
+mpic_send_ipi(vector, cpus);


I'd like to keep all knowledge of mpic in external.c and mpic_init.c,
so let's make a new function smp_send_ipi() in external.c and move the
bounds checking there.  Since mpic_send_ipi() takes an unsigned int
for cpus, you should make sure that the bounds check is on that rather
than unsigned long.



+}
+
 void smp_send_event_check_mask(cpumask_t mask)
 {
 cpu_clear(smp_processor_id(), mask);
 if (!cpus_empty(mask))
-unimplemented();
+send_IPI_mask(mask, EVENT_CHECK_VECTOR);
 }


@@ -78,3 +99,39 @@ int on_selected_cpus(
 unimplemented();
 return 0;
 }
+
+void smp_call_