wangchdo commented on code in PR #17043:
URL: https://github.com/apache/nuttx/pull/17043#discussion_r2370741981
##########
sched/sched/sched_suspendscheduler.c:
##########
@@ -68,6 +68,12 @@ void nxsched_suspend_scheduler(FAR struct tcb_s *tcb)
return;
}
+#ifdef CONFIG_TASK_STACK_OVERFLOW_CHECK
+ /* Check for stack overflow */
+
+ DEBUGASSERT(!up_check_tcbstack_overflow(tcb));
Review Comment:
Hi @xiaoxiang781216
1. **I still can't find the point of updating the API of `up_check_tcbstack`;
`up_check_tcbstack` has its own meaning and is used in many places in the
kernel code....
[PR17056](https://github.com/apache/nuttx/pull/17056) would make the kernel
code less clear and needs more testing to verify that the change is OK....**
2. But I do believe my patch introduced a duplicated check in the
implementation of `up_check_tcbstack_overflow`. I amended it so that
`up_check_tcbstack_overflow` performs the same low-level checking as
`up_check_tcbstack`; I think this is clearer and has no duplicated code...
3. Besides, I don't think
[PR17056](https://github.com/apache/nuttx/pull/17056) implements any
additional feature, and in fact it does not handle the case where
CONFIG_SCHED_SUSPENDSCHEDULER is not enabled, which my patch does:
```
#ifdef CONFIG_SCHED_SUSPENDSCHEDULER
void nxsched_suspend_scheduler(FAR struct tcb_s *tcb);
#else
# ifdef CONFIG_TASK_STACK_OVERFLOW_CHECK
# define nxsched_suspend_scheduler(tcb) \
do \
{ \
DEBUGASSERT(!up_check_tcbstack_overflow(tcb));\
} \
while (0)
# else
# define nxsched_suspend_scheduler(tcb)
# endif
#endif
```
4. You mentioned `compare sp directly`, which I don't understand... is
there any need to `compare sp directly` for the stack overflow check?
5. Also, I think it is easy to add support for more architectures with my
PR: **just implementing `up_check_tcbstack_overflow` in the arch code will
be enough.**
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]