btashton commented on issue #2663:
URL: 
https://github.com/apache/incubator-nuttx/issues/2663#issuecomment-757423978


   
   
   > Tiva ADC driver already uses it. For example, in my application I want to 
short a sensor to ground, measure a different sensor to ensure the two sensors 
aren't shorted together, and then release the first sensor before the external 
analog plausibility logic detects it. If a bunch of CAN messages happened to 
come in and block the work queue in the middle of that, I imagine there could 
be problems.
   
   I don't see how this will be an issue: your driver should only ever have one 
work item scheduled on the work queue at a time.  The logic you have in that 
thread is very small, small enough that most of the other CAN drivers just call 
`can_receive` right in the interrupt handler.  Where the work queue becomes a 
problem is when a worker callback waits on a semaphore or polls a ready/busy 
register; I did not see either of those cases here.  So if a bunch of ADC work 
and CAN work comes in at the same time, the items should be scheduled fairly, 
and you may actually pay less in context switching.
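The "scheduled once" point above can be sketched with a small mock. This is not NuttX code; `work_available`, `work_queue`, and the `struct work_s` here are simplified stand-ins for the NuttX work-queue API, modeling the common driver pattern where the ISR queues its work item only if it is not already pending, so a burst of interrupts never stacks up multiple entries:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the NuttX work-queue item. */

struct work_s
{
  bool queued;              /* true while the item sits on the queue */
};

static int g_queued_count;  /* entries currently on the mock queue */

/* Mock of work_available(): true if the item is not already queued. */

static bool work_available(struct work_s *work)
{
  return !work->queued;
}

/* Mock of work_queue(): put the item on the queue. */

static void work_queue_mock(struct work_s *work)
{
  work->queued = true;
  g_queued_count++;
}

/* Worker thread pops and runs the item. */

static void work_run(struct work_s *work)
{
  work->queued = false;
  g_queued_count--;
}

/* ISR pattern: schedule the driver's processing at most once. */

static void fake_isr(struct work_s *work)
{
  if (work_available(work))
    {
      work_queue_mock(work);
    }
}
```

With this guard, three back-to-back interrupts still leave exactly one pending work item, which is why a burst of CAN traffic does not pile work up behind the ADC callback.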
   
   > But regarding the kernel thread... isn't argv copied to the new thread's 
TCB? It seems like it should be safe to use stack memory to pass the arguments 
in that case. I agree I could move the kernel thread without much trouble but I 
think there is still a bug here nonetheless.
   
   You are right, that case is handled by `nxtask_setup_stackargs`; I was 
thinking of something else.
   
   
   I am not yet disagreeing that there might be a bug; I am just pointing out 
that what you are doing differs from how most other drivers are written, and 
that I don't really see the benefit (while I do see a drawback in resource 
usage).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
