MTres19 commented on issue #2663: URL: https://github.com/apache/incubator-nuttx/issues/2663#issuecomment-757418491
Hi @btashton, yep, I think it's the semaphore for the task group's list of open file descriptors, `struct tcb_s.group->tg_filelist.fl_sem`, that's being initialized. I'm not familiar with the code, though, so it's hard for me to see the big picture. The relevant line in my driver is [here](https://github.com/MTres19/incubator-nuttx/blob/b63c54bb52f4158c6495723359032668f0947080/arch/arm/src/tiva/common/tiva_can.c#L409) if you're curious. I assume this problem hasn't come up before because most drivers wouldn't create a kernel thread in a function that's called from the application.

The following change "fixes" the problem for me, but I know it's just a band-aid:

~~~diff
diff --git a/sched/task/task_init.c b/sched/task/task_init.c
index b187b74ebe..3e3fd61a45 100644
--- a/sched/task/task_init.c
+++ b/sched/task/task_init.c
@@ -101,11 +101,14 @@ int nxtask_init(FAR struct task_tcb_s *tcb, const char *name, int priority,
     }
 
   /* Associate file descriptors with the new task */
-
-  ret = group_setuptaskfiles(tcb);
-  if (ret < 0)
+
+  if (!(ttype & TCB_FLAG_TTYPE_KERNEL))
     {
-      goto errout_with_group;
+      ret = group_setuptaskfiles(tcb);
+      if (ret < 0)
+        {
+          goto errout_with_group;
+        }
     }
 
   if (stack)
~~~

Based on [this comment](https://github.com/apache/incubator-nuttx/blob/e772be8c8a019d4c9123bff9bc6037b4da15d946/fs/inode/fs_files.c#L155-L160).

Regarding a dedicated thread vs. the work queue: I actually do use the low-priority work queue to check periodically for bus errors when servicing the interrupt would be excessive. I read the admonition against using the work queue for things like this [on the wiki](https://cwiki.apache.org/confluence/display/NUTTX/Work+Queue+Deadlocks).
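For context, the periodic bus-error check follows the usual self-re-arming work queue pattern. This is only a rough sketch against NuttX's `work_queue()` API, not the actual code from `tiva_can.c`; the names `tiva_can_errwork`, `priv->errwork`, and `POLL_DELAY` are illustrative placeholders:

```c
/* Sketch: periodic bus-error poll on the low-priority work queue.
 * NOTE: identifiers below are illustrative, not the exact ones in
 * tiva_can.c.
 */

#include <nuttx/clock.h>
#include <nuttx/wqueue.h>

#define POLL_DELAY SEC2TICK(1)  /* re-check bus errors once per second */

struct tiva_can_priv_s
{
  struct work_s errwork;        /* must persist while the work is queued */
  /* ... other driver state ... */
};

static void tiva_can_errwork(FAR void *arg)
{
  FAR struct tiva_can_priv_s *priv = (FAR struct tiva_can_priv_s *)arg;

  /* ... read and handle the controller's error counters here ... */

  /* Re-arm: queue this function again on the low-priority queue.
   * Because it runs on LPWORK, it must never block waiting on other
   * work scheduled on LPWORK -- that is the deadlock scenario the
   * wiki page warns about.
   */

  work_queue(LPWORK, &priv->errwork, tiva_can_errwork, priv, POLL_DELAY);
}
```

Since the worker only re-queues itself and never blocks on other queued work, it sidesteps the deadlock pattern described on the wiki while still avoiding a dedicated kernel thread.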
