Re: all platforms, main(): call clockqueue_init() just before sched_init_cpu()

2023-06-15 Thread Claudio Jeker
On Tue, Jun 13, 2023 at 02:19:36PM +0200, Mark Kettenis wrote:
> > Date: Mon, 12 Jun 2023 19:09:59 -0500
> > From: Scott Cheloha 
> > 
> > We need to initialize the per-CPU clockintr_queue struct before we can
> > call clockintr_establish() from sched_init_cpu().
> > 
> > Initialization is done with a call to clockqueue_init().  Currently we
> > call it during clockintr_cpu_init(), i.e. each CPU initializes its own
> > clockintr_queue struct.
> > 
> > This patch moves the clockqueue_init() call out into main() and out
> > into the MD code, just before sched_init_cpu().  So, now the primary
> > CPU initializes the clockintr_queue struct on behalf of the secondary
> > CPUs.
> > 
> > No behavior change.
> > 
> > With this in place, we can start breaking pieces off of the
> > hardclock() and statclock() in the next patch.
> > 
> > ok?
> 
> Maybe we should consider having an MI "initialize data structures for
> this CPU" function at some point.  But having this next to
> sched_init_cpu() is fine for now.
> 
> ok kettenis@

It would be nice to have an MI way, but right now this needs to go in so
we can make progress with hardclock() and statclock().
I have had this diff (plus more) running on my system for a while now; no
problems noticed.

OK claudio@
 
> > Index: kern/init_main.c
> > ===
> > RCS file: /cvs/src/sys/kern/init_main.c,v
> > retrieving revision 1.320
> > diff -u -p -r1.320 init_main.c
> > --- kern/init_main.c	1 Jan 2023 07:00:51 -0000	1.320
> > +++ kern/init_main.c	12 Jun 2023 23:55:43 -0000
> > @@ -47,6 +47,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include <sys/clockintr.h>
> >  #include 
> >  #include 
> >  #include 
> > @@ -313,6 +314,7 @@ main(void *framep)
> > /* Initialize run queues */
> > sched_init_runqueues();
> > sleep_queue_init();
> > +   clockqueue_init(&curcpu()->ci_queue);
> > sched_init_cpu(curcpu());
> > 	p->p_cpu->ci_randseed = (arc4random() & 0x7fffffff) + 1;
> >  
> > Index: kern/kern_clockintr.c
> > ===
> > RCS file: /cvs/src/sys/kern/kern_clockintr.c,v
> > retrieving revision 1.21
> > diff -u -p -r1.21 kern_clockintr.c
> > --- kern/kern_clockintr.c	23 Apr 2023 00:08:36 -0000	1.21
> > +++ kern/kern_clockintr.c	12 Jun 2023 23:55:43 -0000
> > @@ -66,7 +66,6 @@ void clockintr_schedule(struct clockintr
> >  void clockintr_schedule_locked(struct clockintr *, uint64_t);
> >  void clockintr_statclock(struct clockintr *, void *);
> >  void clockintr_statvar_init(int, uint32_t *, uint32_t *, uint32_t *);
> > -void clockqueue_init(struct clockintr_queue *);
> >  uint64_t clockqueue_next(const struct clockintr_queue *);
> >  void clockqueue_reset_intrclock(struct clockintr_queue *);
> >  uint64_t nsec_advance(uint64_t *, uint64_t, uint64_t);
> > @@ -114,7 +113,6 @@ clockintr_cpu_init(const struct intrcloc
> >  
> > KASSERT(ISSET(clockintr_flags, CL_INIT));
> >  
> > -   clockqueue_init(cq);
> > if (ic != NULL && !ISSET(cq->cq_flags, CQ_INTRCLOCK)) {
> > cq->cq_intrclock = *ic;
> > SET(cq->cq_flags, CQ_INTRCLOCK);
> > Index: sys/clockintr.h
> > ===
> > RCS file: /cvs/src/sys/sys/clockintr.h,v
> > retrieving revision 1.7
> > diff -u -p -r1.7 clockintr.h
> > --- sys/clockintr.h	20 Apr 2023 14:51:28 -0000	1.7
> > +++ sys/clockintr.h	12 Jun 2023 23:55:43 -0000
> > @@ -129,6 +129,7 @@ void clockintr_trigger(void);
> >   * Kernel API
> >   */
> >  
> > +void clockqueue_init(struct clockintr_queue *);
> >  int sysctl_clockintr(int *, u_int, void *, size_t *, void *, size_t);
> >  
> >  #endif /* _KERNEL */
> > Index: arch/alpha/alpha/cpu.c
> > ===
> > RCS file: /cvs/src/sys/arch/alpha/alpha/cpu.c,v
> > retrieving revision 1.46
> > diff -u -p -r1.46 cpu.c
> > --- arch/alpha/alpha/cpu.c	10 Dec 2022 15:02:29 -0000	1.46
> > +++ arch/alpha/alpha/cpu.c	12 Jun 2023 23:55:43 -0000
> > @@ -597,6 +597,7 @@ cpu_hatch(struct cpu_info *ci)
> > ALPHA_TBIA();
> > alpha_pal_imb();
> >  
> > +   clockqueue_init(&ci->ci_queue);
> > KERNEL_LOCK();
> > sched_init_cpu(ci);
> > 	nanouptime(&ci->ci_schedstate.spc_runtime);
> > Index: arch/amd64/amd64/cpu.c
> > ===
> > RCS file: /cvs/src/sys/arch/amd64/amd64/cpu.c,v
> > retrieving revision 1.168
> > diff -u -p -r1.168 cpu.c
> > --- arch/amd64/amd64/cpu.c	24 Apr 2023 09:04:03 -0000	1.168
> > +++ arch/amd64/amd64/cpu.c	12 Jun 2023 23:55:43 -0000
> > @@ -664,6 +664,7 @@ cpu_attach(struct device *parent, struct
> >  #if defined(MULTIPROCESSOR)
> > cpu_intr_init(ci);
> > cpu_start_secondary(ci);
> > +   clockqueue_init(&ci->ci_queue);
> > sched_init_cpu(ci);
> > ncpus++;
> > 	if (ci->ci_flags & CPUF_PRESENT) {

Re: all platforms, main(): call clockqueue_init() just before sched_init_cpu()

2023-06-13 Thread Mark Kettenis
> Date: Mon, 12 Jun 2023 19:09:59 -0500
> From: Scott Cheloha 
> 
> We need to initialize the per-CPU clockintr_queue struct before we can
> call clockintr_establish() from sched_init_cpu().
> 
> Initialization is done with a call to clockqueue_init().  Currently we
> call it during clockintr_cpu_init(), i.e. each CPU initializes its own
> clockintr_queue struct.
> 
> This patch moves the clockqueue_init() call out into main() and out
> into the MD code, just before sched_init_cpu().  So, now the primary
> CPU initializes the clockintr_queue struct on behalf of the secondary
> CPUs.
> 
> No behavior change.
> 
> With this in place, we can start breaking pieces off of the
> hardclock() and statclock() in the next patch.
> 
> ok?

Maybe we should consider having an MI "initialize data structures for
this CPU" function at some point.  But having this next to
sched_init_cpu() is fine for now.

ok kettenis@

> Index: kern/init_main.c
> ===
> RCS file: /cvs/src/sys/kern/init_main.c,v
> retrieving revision 1.320
> diff -u -p -r1.320 init_main.c
> --- kern/init_main.c	1 Jan 2023 07:00:51 -0000	1.320
> +++ kern/init_main.c	12 Jun 2023 23:55:43 -0000
> @@ -47,6 +47,7 @@
>  #include 
>  #include 
>  #include 
> +#include <sys/clockintr.h>
>  #include 
>  #include 
>  #include 
> @@ -313,6 +314,7 @@ main(void *framep)
>   /* Initialize run queues */
>   sched_init_runqueues();
>   sleep_queue_init();
> +	clockqueue_init(&curcpu()->ci_queue);
>   sched_init_cpu(curcpu());
> 	p->p_cpu->ci_randseed = (arc4random() & 0x7fffffff) + 1;
>  
> Index: kern/kern_clockintr.c
> ===
> RCS file: /cvs/src/sys/kern/kern_clockintr.c,v
> retrieving revision 1.21
> diff -u -p -r1.21 kern_clockintr.c
> --- kern/kern_clockintr.c	23 Apr 2023 00:08:36 -0000	1.21
> +++ kern/kern_clockintr.c	12 Jun 2023 23:55:43 -0000
> @@ -66,7 +66,6 @@ void clockintr_schedule(struct clockintr
>  void clockintr_schedule_locked(struct clockintr *, uint64_t);
>  void clockintr_statclock(struct clockintr *, void *);
>  void clockintr_statvar_init(int, uint32_t *, uint32_t *, uint32_t *);
> -void clockqueue_init(struct clockintr_queue *);
>  uint64_t clockqueue_next(const struct clockintr_queue *);
>  void clockqueue_reset_intrclock(struct clockintr_queue *);
>  uint64_t nsec_advance(uint64_t *, uint64_t, uint64_t);
> @@ -114,7 +113,6 @@ clockintr_cpu_init(const struct intrcloc
>  
>   KASSERT(ISSET(clockintr_flags, CL_INIT));
>  
> - clockqueue_init(cq);
>   if (ic != NULL && !ISSET(cq->cq_flags, CQ_INTRCLOCK)) {
>   cq->cq_intrclock = *ic;
>   SET(cq->cq_flags, CQ_INTRCLOCK);
> Index: sys/clockintr.h
> ===
> RCS file: /cvs/src/sys/sys/clockintr.h,v
> retrieving revision 1.7
> diff -u -p -r1.7 clockintr.h
> --- sys/clockintr.h	20 Apr 2023 14:51:28 -0000	1.7
> +++ sys/clockintr.h	12 Jun 2023 23:55:43 -0000
> @@ -129,6 +129,7 @@ void clockintr_trigger(void);
>   * Kernel API
>   */
>  
> +void clockqueue_init(struct clockintr_queue *);
>  int sysctl_clockintr(int *, u_int, void *, size_t *, void *, size_t);
>  
>  #endif /* _KERNEL */
> Index: arch/alpha/alpha/cpu.c
> ===
> RCS file: /cvs/src/sys/arch/alpha/alpha/cpu.c,v
> retrieving revision 1.46
> diff -u -p -r1.46 cpu.c
> --- arch/alpha/alpha/cpu.c	10 Dec 2022 15:02:29 -0000	1.46
> +++ arch/alpha/alpha/cpu.c	12 Jun 2023 23:55:43 -0000
> @@ -597,6 +597,7 @@ cpu_hatch(struct cpu_info *ci)
>   ALPHA_TBIA();
>   alpha_pal_imb();
>  
> +	clockqueue_init(&ci->ci_queue);
>   KERNEL_LOCK();
>   sched_init_cpu(ci);
> 	nanouptime(&ci->ci_schedstate.spc_runtime);
> Index: arch/amd64/amd64/cpu.c
> ===
> RCS file: /cvs/src/sys/arch/amd64/amd64/cpu.c,v
> retrieving revision 1.168
> diff -u -p -r1.168 cpu.c
> --- arch/amd64/amd64/cpu.c	24 Apr 2023 09:04:03 -0000	1.168
> +++ arch/amd64/amd64/cpu.c	12 Jun 2023 23:55:43 -0000
> @@ -664,6 +664,7 @@ cpu_attach(struct device *parent, struct
>  #if defined(MULTIPROCESSOR)
>   cpu_intr_init(ci);
>   cpu_start_secondary(ci);
> +	clockqueue_init(&ci->ci_queue);
>   sched_init_cpu(ci);
>   ncpus++;
>   if (ci->ci_flags & CPUF_PRESENT) {
> Index: arch/arm/arm/cpu.c
> ===
> RCS file: /cvs/src/sys/arch/arm/arm/cpu.c,v
> retrieving revision 1.57
> diff -u -p -r1.57 cpu.c
> --- arch/arm/arm/cpu.c	12 Mar 2022 14:40:41 -0000	1.57
> +++ arch/arm/arm/cpu.c	12 Jun 2023 23:55:43 -0000
> @@ -391,6 +391,7 @@ cpu_attach(struct device *parent, struct
>   "cpu-release-addr", 0);
>   }
>  
> +