xiaoxiang781216 commented on PR #6631: URL: https://github.com/apache/incubator-nuttx/pull/6631#issuecomment-1186703810
> > I wasn't aware that this check is also present there.
> > Shall I drop this PR?
>
> The check in `irq_dispatch()` only calls `kmm_checkforcorruption()` on IRQ exit to check the heaps for TCBs **marked with TCB_FLAG_MEM_CHECK** (and there are no mentions of that flag whatsoever in the master codebase). It also build-depends on `CONFIG_DEBUG_MM`, as @acassis suggests for this checker as well.

We can refine how to trigger the check. But `irq_dispatch` is the best place to get a reliable result: if `kmm_checkforcorruption` reports an error there, we can be certain that the interrupted thread corrupted the memory. On the other hand, even if the LP work queue detects the memory corruption, it is still hard to identify which thread corrupted it.

> I see the PR as still useful because it also checks stacks, and it doesn't require setting a tcb flag. Yes, `nsh> ps` can show stack usage in the interactive shell, though it requires STACK_COLORATION and provides only a very slow sample rate, as does `stackmonitor`. However, it might conflict with / duplicate the work done in `irq_dispatch`. The good thing is that we're decoupling the memory checks from the IDLE thread, which used to cause problems in #5266, and that issue/PR mentioned moving the checks to the LPWORK thread (half a year ago).

It's better to enable STACK_CANARIES, ARMV8M_STACKCHECK_HARDWARE or ARMV[7|8]M_STACKCHECK, since they can report a stack overflow immediately.

> @fjpanag How many processes/threads (and pthreads) were running on your STM32F427 board? Did you enable DEBUG_MM as well? Can you somehow verify that there won't be deadlock problems with the mm semaphores? Was there any external (FMC SDRAM) memory attached to the system heaps?
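To make the trade-off concrete, below is a rough sketch of the kind of IRQ-exit hook being discussed. It is only an illustration, not the actual upstream `irq_dispatch()` code: `irq_mem_check()` is a hypothetical helper name, the header set is approximate, and `TCB_FLAG_MEM_CHECK` / `kmm_checkforcorruption()` are simply the names used in this thread (the flag, as noted above, does not appear in master).

```c
/* Rough sketch only -- not the actual irq_dispatch() implementation.
 * Header list is approximate; this would live inside kernel code such as
 * sched/irq/irq_dispatch.c.
 */

#include <nuttx/config.h>
#include <nuttx/kmalloc.h>
#include <nuttx/sched.h>

#ifdef CONFIG_DEBUG_MM
static void irq_mem_check(void)          /* hypothetical helper name */
{
  FAR struct tcb_s *rtcb = this_task();

  /* Only scan the heaps if the interrupted thread opted in via the flag
   * discussed above.
   */

  if ((rtcb->flags & TCB_FLAG_MEM_CHECK) != 0)
    {
      /* The scan runs before any context switch, so a failure here points
       * at the thread that was running when the interrupt arrived.
       */

      kmm_checkforcorruption();
    }
}
#endif /* CONFIG_DEBUG_MM */
```

The point the sketch tries to show is that the heap is scanned while the interrupted thread is still the current task, so any corruption found can be attributed to it, unlike a periodic scan running on the LP work queue.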