Looks good, Amos. :)

On Sat, 2006-08-19 at 02:47 -0400, Amos Waterland wrote:
> 
> diff -r 539a1e666982 xen/arch/powerpc/powerpc64/exceptions.S
> --- a/xen/arch/powerpc/powerpc64/exceptions.S   Fri Aug 18 14:07:50 2006 -0400
> +++ b/xen/arch/powerpc/powerpc64/exceptions.S   Fri Aug 18 21:06:13 2006 -0400
> @@ -514,6 +514,42 @@ _GLOBAL(sleep)
>      mtmsrd r3
>      blr
>  
> +/* The primary processor issues a firmware call to spin us up at this
> + * address, passing our CPU number in r3.  We only need a function
> + * entry point instead of a descriptor since this is never called from
> + * C code.
> + */
>      .globl spin_start
>  spin_start:
...

I know there was some discussion about the need for sync/isync/nop in
this code. Could you document why these are needed? I would settle for
"XXX I'm not sure why these are needed but we crash if we don't have
them.".

> @@ -76,6 +77,13 @@ extern void idle_loop(void);
>  
>  /* move us to a header file */
>  extern void initialize_keytable(void);
> +
> +static void init_parea(int cpuid);
> +
> +volatile struct processor_area * volatile global_cpu_table[NR_CPUS];
> +
> +static int kick_secondary_cpus(void);
> +int secondary_cpu_init(int cpuid, unsigned long);
>  
>  int is_kernel_text(unsigned long addr)
>  {

Could these new prototypes be avoided by reordering the functions in
this file and/or by putting them into a header?
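
For example (a sketch only; the path and guard name are guesses, and
the second parameter name is mine since the patch leaves it unnamed):

    /* xen/include/asm-powerpc/smp.h -- hypothetical location */
    #ifndef _ASM_POWERPC_SMP_H_
    #define _ASM_POWERPC_SMP_H_

    #include <xen/config.h>    /* NR_CPUS */

    struct processor_area;

    extern volatile struct processor_area * volatile
        global_cpu_table[NR_CPUS];

    int secondary_cpu_init(int cpuid, unsigned long vector);

    #endif /* _ASM_POWERPC_SMP_H_ */

init_parea() and kick_secondary_cpus() are static, so defining them
above their callers would drop those two prototypes entirely.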

> +static void init_parea(int cpuid)
> +{
> +    /* Be careful not to shadow the global variable.  */
> +    volatile struct processor_area *pa;
> +    void *stack;
> +
> +    pa = xmalloc(struct processor_area);
> +    if (pa == NULL)
> +        panic("failed to allocate parea\n");
> +
> +    stack = alloc_xenheap_pages(STACK_ORDER);
> +    if (stack == NULL)
> +        panic("failed to allocate stack\n");

Please change these panics to be:
        panic("%s: failed to allocate parea for cpu %d\n", __func__, cpuid);
and
        panic("%s: failed to allocate stack (order %d) for cpu %d\n",
              __func__, STACK_ORDER, cpuid);
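
i.e. the quoted hunk would become (same code, only the panic calls
changed):

    static void init_parea(int cpuid)
    {
        /* Be careful not to shadow the global variable.  */
        volatile struct processor_area *pa;
        void *stack;

        pa = xmalloc(struct processor_area);
        if (pa == NULL)
            panic("%s: failed to allocate parea for cpu %d\n",
                  __func__, cpuid);

        stack = alloc_xenheap_pages(STACK_ORDER);
        if (stack == NULL)
            panic("%s: failed to allocate stack (order %d) for cpu %d\n",
                  __func__, STACK_ORDER, cpuid);
        /* ... rest of the function unchanged ... */
    }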

-- 
Hollis Blanchard
IBM Linux Technology Center

