wangchdo commented on PR #17075:
URL: https://github.com/apache/nuttx/pull/17075#issuecomment-3351166714

   > > hi @suoyuanG
   > > I think your [PR17060 ](https://github.com/apache/nuttx/pull/17060) just 
moved the arch/sched interaction into the newly added **nxsched_switch_context()**
   > > Why remove **nxsched_switch_context()** again and then add 
**nxsched_switch_critmon()** so quickly? I think @xiaoxiang781216 just emphasized 
the high quality required of PRs merged into the NuttX kernel
   > 
   > both nxsched_switch_context and nxsched_switch_critmon still exist: 
https://github.com/apache/nuttx/pull/17075/files#diff-0ab8555d6d6bfca83c93ad5b2795dda91a5da94fe4bcc80cbe759479b5cf0a5aR50
   > 
   > > I don't think it is good to make such a big change to the scheduler so 
soon; what do you think about this? @xiaoxiang781216
   > 
   > this patchset continues the work in #17060. The main idea is to let the 
arch code call nxsched_switch_context instead of 
nxsched_suspend_scheduler/nxsched_resume_scheduler, since we found that several 
arches don't call nxsched_suspend_scheduler/nxsched_resume_scheduler in pairs.
   > 
   > this change also increases performance when a monitor (sched note, 
critical-section monitor, ...) is enabled, since two function calls are merged 
into one.
   
   Hi @xiaoxiang781216
   
   Thanks for your explanation. I did find several arches that do not call 
nxsched_suspend_scheduler/nxsched_resume_scheduler.
   
   But I also found that those are not places where a context switch happens, 
and these arches do call nxsched_suspend_scheduler/nxsched_resume_scheduler in 
pairs where a context switch does happen. Maybe we don't need to change these 
places.
   
   Anyway, if this PR can improve performance, that would be good, but more 
testing should probably be done on it first. Thank you.
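   For reference, the pairing argument above can be sketched outside the 
kernel. This is a minimal, hypothetical stand-in, not the real NuttX 
implementation: the two hooks below model 
nxsched_suspend_scheduler()/nxsched_resume_scheduler(), and the single 
switch function models the nxsched_switch_context entry point that the arch 
layer would call, so the hooks can never run unpaired:

   ```c
   #include <assert.h>
   #include <stdio.h>

   /* Hypothetical stand-ins for the per-task scheduler hooks; in NuttX
    * these would be nxsched_suspend_scheduler()/nxsched_resume_scheduler(). */

   static int g_suspend_calls;
   static int g_resume_calls;

   static void sched_suspend_hook(void)
   {
     g_suspend_calls++;
   }

   static void sched_resume_hook(void)
   {
     g_resume_calls++;
   }

   /* Single entry point the arch layer calls on every context switch
    * (modeled after the nxsched_switch_context idea): the outgoing task is
    * suspended and the incoming task is resumed in one place, so the two
    * hooks always execute as a pair and only one call crosses the
    * arch/sched boundary. */

   static void sched_switch_context(int from_tid, int to_tid)
   {
     (void)from_tid;
     (void)to_tid;
     sched_suspend_hook();
     sched_resume_hook();
   }

   int main(void)
   {
     int i;

     for (i = 0; i < 3; i++)
       {
         sched_switch_context(i, i + 1);
       }

     /* The hooks stay paired regardless of how many switches occur. */

     assert(g_suspend_calls == g_resume_calls);
     printf("paired calls: %d\n", g_suspend_calls);
     return 0;
   }
   ```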
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
