Hello All,

Sorry if this is somewhat off-topic, but I have a question about Radeon 9250 card lockups. I am doing an experimental research project on graphics engine resource management based on the r200 driver. I have modified the DRM implementation so that all commands sent to the ring on behalf of a process are queued in the kernel (by redefining the xxx_RING macros), and when a user-level process emits state, it marks it in the command buffer so that the DRM side can distinguish it from other commands.

A kernel thread then takes commands from the per-client queues and dispatches them to the GPU. When it detects a radeon_cp_cmdbuf coming from a different client than the previous one, it emits the state that client relies upon (just as r200_dri.so does, but in the kernel).

I tried to optimize context switches by remembering, in the scheduler thread, what was last sent to the GPU for every hardware state 'atom', and on a context switch emitting only the atoms that actually differ. However, this results in quite frequent lockups (four windows, each with an app drawing 10000 spinning triangles, lock the card up after about ten seconds), whereas the version that re-emits the full state on every switch is considerably more robust. Which atoms must be emitted on every context switch, even if the previous client emitted exactly the same command sequence for that atom?
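In case it clarifies things, here is a rough sketch of the diffing logic in my scheduler thread (the structure and function names are simplified stand-ins for my experimental code, not anything in the stock DRM):

    #include <linux/types.h>
    #include <linux/string.h>

    #define NUM_ATOMS       32   /* hypothetical; one slot per r200 state atom */
    #define ATOM_MAX_DWORDS 64

    struct state_atom {
            int size;                    /* number of dwords in the packet   */
            u32 cmd[ATOM_MAX_DWORDS];    /* command sequence the client sent */
    };

    /* Last command sequence actually dispatched to the GPU, per atom. */
    static struct state_atom last_emitted[NUM_ATOMS];

    static void emit_context_switch(struct state_atom *client_state)
    {
            int i;

            for (i = 0; i < NUM_ATOMS; i++) {
                    struct state_atom *want = &client_state[i];
                    struct state_atom *have = &last_emitted[i];

                    /* Skip atoms whose exact command sequence is already
                     * resident on the GPU -- this is the optimization
                     * that seems to trigger the lockups. */
                    if (want->size == have->size &&
                        memcmp(want->cmd, have->cmd,
                               want->size * sizeof(u32)) == 0)
                            continue;

                    /* dispatch_atom() wraps the usual BEGIN_RING /
                     * OUT_RING / ADVANCE_RING sequence (hypothetical). */
                    dispatch_atom(want);
                    *have = *want;
            }
    }

The robust version is the same loop with the memcmp() check dropped, so every atom is dispatched unconditionally on each switch.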

The lockups I am experiencing are real hardware lockups: I watched the ring head/tail positions while the machine was hung and the head does not change. By the way, is it possible to detect a hardware lockup and reset the hardware automatically? I have read that the Longhorn display drivers for existing hardware are capable of something like that.
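For reference, this is roughly how I check for the hang (a sketch in the radeon_cp.c context, using the stock radeon DRM register accessors; the one-second window is an arbitrary choice of mine):

    /* Crude lockup check: if the CP read pointer does not advance for
     * a whole second, assume the hardware is hung.  RADEON_READ and
     * RADEON_CP_RB_RPTR are the standard radeon DRM accessors;
     * ring_is_locked_up() is my own helper. */
    static int ring_is_locked_up(drm_radeon_private_t *dev_priv)
    {
            u32 head = RADEON_READ(RADEON_CP_RB_RPTR);
            unsigned long timeout = jiffies + HZ;

            while (time_before(jiffies, timeout)) {
                    if (RADEON_READ(RADEON_CP_RB_RPTR) != head)
                            return 0;   /* head advanced; GPU alive */
                    msleep(10);
            }
            return 1;                   /* head stuck; hard lockup  */
    }

What I would like to know is whether, after detecting this, the chip can be reset from the driver without a reboot.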

Once again, sorry for the off-topic post.

Thank you.
Mikhail Bautin
