> I must have explained things quite poorly in the article you said you
> read. Having a live scheduler allows you to _not_ understand all the
> complex interactions between blocking operations in your system because
> the liveness means that eventually whatever thread you are waiting for
> will proceed. A priority driven Real-time scheduler is not live so if
> you do not understand all blocking relationships, your system may die.
I understand deadlocks, and that when using lots of threads/mutexes this gets hard to analyse. I'm not in favor of just using priority inheritance to get away with bad software design.

> I have no idea whether you should be using RTLinux, but it is absolutely
> correct that Linux is not suitable for hard realtime use.

Not right now, and not without kernel patches. But it should be possible to get a worst case for scheduling latency with Linux, perhaps not a very good one right now (a rough measurement sketch is appended below).

> I've yet to see an example where it was both needed and effective.
> Perhaps you can give me one.

Take a FIFO that is read by a low priority thread and written by a higher priority (SCHED_FIFO) thread that is not allowed to block on the write for more than a short, bounded time. If access to the FIFO is controlled by a mutex without some means to prevent priority inversion, the high priority thread can block indefinitely on the mutex (a minimal priority-inheritance sketch is also appended below).

> And, yes, it does make the system slower even when not using it, for
> several reasons - mentioned in the paper. As one example, all your
> wait queues need to be atomically re-orderable.

I know too little of the implementation to verify this, but if it is true, then that is a good argument against priority inheritance.

--martijn
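
To show what I mean by measuring scheduling latency, here is a rough sketch. It is only a sketch: it assumes a POSIX system with SCHED_FIFO and clock_nanosleep available, the priority, period and iteration count are arbitrary choices of mine, and the observed maximum is merely an empirical lower bound on the real worst case, not a proof of one.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 }; /* arbitrary RT priority */
    struct timespec next, now;
    long long max_lat_ns = 0;
    const long period_ns = 1000000L;                  /* wake up every 1 ms */

    /* Needs root (or equivalent privilege) to enter a real-time class. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 10000; i++) {
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* Sleep until the absolute deadline, then see how late we woke up. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long long lat = (now.tv_sec - next.tv_sec) * 1000000000LL
                      + (now.tv_nsec - next.tv_nsec);
        if (lat > max_lat_ns)
            max_lat_ns = lat;
    }
    printf("max observed wakeup latency: %lld ns\n", max_lat_ns);
    return 0;
}

Running it while the machine is under load gives an idea of how bad the tail gets on a given kernel.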

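And to make the FIFO example concrete, here is a minimal sketch of how the mutex protecting it could be set up with the priority-inheritance protocol, assuming the system supports _POSIX_THREAD_PRIO_INHERIT. The FIFO itself and the reader/writer threads are left out; only the lock setup is shown.

#define _GNU_SOURCE
#include <pthread.h>

/* Mutex protecting the shared FIFO (the FIFO data structure is omitted). */
static pthread_mutex_t fifo_lock;

/* Returns 0 on success, an errno value otherwise. */
static int init_fifo_lock(void)
{
    pthread_mutexattr_t attr;
    int err;

    err = pthread_mutexattr_init(&attr);
    if (err != 0)
        return err;

    /* Request priority inheritance: while the low priority reader holds
     * the lock, it runs at the priority of any SCHED_FIFO writer blocked
     * on it, so the writer's blocking time is bounded by the length of
     * the reader's critical section instead of being unbounded. */
    err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (err == 0)
        err = pthread_mutex_init(&fifo_lock, &attr);

    pthread_mutexattr_destroy(&attr);
    return err;
}

An alternative that avoids the problem entirely is a lock-free single-writer/single-reader ring buffer, but that is a different discussion.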