jasonbu opened a new pull request, #13541:
URL: https://github.com/apache/nuttx/pull/13541

   ## Summary
   The current pm_idle only supports the non-SMP case, but we are now starting 
to face chips, boards, and projects that combine SMP with tight power-consumption 
limits. These patches add SMP support to PM.
   
   We take the PM spinlock with spin_lock_irqsave() and clear this CPU's bit in 
the running cpuset to ensure that no other CPU is still running; the pm_handler 
can then perform cross-core operations while the lock is held.
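
   A minimal sketch of that idle-entry sequence, assuming illustrative names 
(g_pm_lock, g_pm_running, and pm_idle_entry are not the exact identifiers in 
this patch set):
   ```C
   #include <nuttx/config.h>
   #include <nuttx/irq.h>
   #include <nuttx/spinlock.h>

   #include <stdint.h>

   /* Hypothetical sketch: each CPU clears its bit under the PM spinlock;
    * when the mask reaches zero, every core is idle and cross-core PM
    * operations are safe while the lock is held. */

   static spinlock_t g_pm_lock = SP_UNLOCKED;
   static uint32_t g_pm_running = (1u << CONFIG_SMP_NCPUS) - 1;

   void pm_idle_entry(int cpu)
   {
     irqstate_t flags = spin_lock_irqsave(&g_pm_lock);

     g_pm_running &= ~(1u << cpu);
     if (g_pm_running == 0)
       {
         /* Last core is going idle: system-domain work is safe here. */
       }

     /* The pm_handler runs next; see the unlock -> WFI -> lock note
      * below. */

     spin_unlock_irqrestore(&g_pm_lock, flags);
   }
   ```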
   
   PM_IDLE_DOMAIN now acts only as the system domain, and it is updated only 
when the last core enters sleep or the first core leaves sleep.
   To get notifications from a specific core, register a callback on that 
CPU's domain (a registration sketch follows the macro below):
   ```C
   #define PM_SMP_CPU_DOMAIN(cpu) (CONFIG_PM_NDOMAINS - CONFIG_SMP_NCPUS + (cpu))
   ```
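
   For example, a per-CPU callback could be registered like this; pm_register() 
and the pm_callback_s notify signature follow the existing NuttX PM API, while 
the callback names and the filtering shown are just an illustration:
   ```C
   #include <nuttx/power/pm.h>

   /* Notify callback: react only to state changes in CPU 1's domain. */

   static void cpu1_pm_notify(FAR struct pm_callback_s *cb,
                              int domain, enum pm_state_e pmstate)
   {
     if (domain == PM_SMP_CPU_DOMAIN(1))
       {
         /* CPU 1 changed PM state: per-core bookkeeping goes here. */
       }
   }

   static struct pm_callback_s g_cpu1_pm_cb =
   {
     .notify = cpu1_pm_notify,
   };

   /* During bring-up: pm_register(&g_cpu1_pm_cb); */
   ```
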
   The system domain will never go to a deeper state than the current state of 
any CPU domain.
   This behavior is realized with pm_stay()/pm_relax() calls whenever a CPU 
domain's state changes.
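
   Conceptually, the coupling looks like the following sketch; pm_stay() and 
pm_relax() are the existing NuttX PM API, but the function name and transition 
bookkeeping are illustrative:
   ```C
   #include <nuttx/power/pm.h>

   /* Hypothetical sketch: when a CPU domain moves from oldstate to
    * newstate, move the hold on the system domain along with it, so the
    * system domain can never be deeper than this CPU's current state. */

   static void cpu_domain_changed(enum pm_state_e oldstate,
                                  enum pm_state_e newstate)
   {
     pm_stay(PM_IDLE_DOMAIN, newstate);   /* Hold at the new state */
     pm_relax(PM_IDLE_DOMAIN, oldstate);  /* Drop the previous hold */
   }
   ```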
   
   We expose the lock/unlock behavior to the pm_handler, so the handler must 
manually perform unlock -> WFI -> lock in order to keep the time the CPUs hold 
the lock to a minimum. Operations that do not require cross-core 
synchronization can also be performed after the unlock and before the WFI.
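
   A skeleton of such a handler might look like this; the handler signature and 
the pm_idle_lock()/pm_idle_unlock() helpers are assumptions for illustration, 
not the exact API introduced here:
   ```C
   /* Hypothetical pm_handler skeleton showing unlock -> WFI -> lock.
    * pm_idle_lock()/pm_idle_unlock() are placeholder names. */

   static void board_pm_handler(int cpu, enum pm_state_e pmstate)
   {
     /* Cross-core work (e.g. last-core-to-sleep actions) runs here,
      * while the PM spinlock is still held. */

     pm_idle_unlock();        /* Release the lock as early as possible */

     /* Per-CPU work that needs no cross-core synchronization can run
      * here, after the unlock and before WFI. */

     asm volatile ("wfi");    /* Wait for an interrupt (ARM) */

     pm_idle_lock();          /* Re-acquire before returning to pm_idle */
   }
   ```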
   
   As this always runs in the idle threads, sched_lock() is needed, so 
**per-TCB/per-core sched lock support is required**.
   This PR will be kept in draft status until that feature is ready.
   
   ## Impact
   This mechanism is optional and has no impact unless CONFIG_PM and 
CONFIG_SMP are enabled at the same time.
   
   When CONFIG_PM and CONFIG_SMP are both enabled, the pm_idle function is 
replaced with the SMP version, and the chip/board is required to implement a 
PM handler that takes care of CPU-domain and system-domain state changes.
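
   For example, a board's IDLE loop might hand its handler to pm_idle like 
this; pm_idle() taking a handler callback is an assumption about the new 
SMP-aware interface:
   ```C
   /* Hypothetical wiring: the architecture's IDLE loop delegates PM
    * decisions to pm_idle(), passing the board-specific handler. */

   void up_idle(void)
   {
     pm_idle(board_pm_handler);
   }
   ```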
   
   ## Testing
   CI tests, plus qemu-v8a with CONFIG_PM manually enabled, and a Cortex-A7 
SMP board.
   

