That's a very good question. There are a couple of reasons for this.

1) Writing any kind of meta operation and custom rendering code on top of GL is a horrible idea and very prone to errors. If you don't restore all states after you're done, you may break the application. If the application sets a state you didn't take into account, it may break the HUD rendering. And there is a lot of functionality in GL that must be taken into account, while it's pretty simple with Gallium, which has tools for saving and restoring states.
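The save/restore concern in 1) can be sketched with a toy model. These names are hypothetical, not the actual Gallium or GL interface; the point is only that a single centralized save/restore pair (the pattern Gallium's state-saving tools provide) cannot forget an individual state, whereas hand-written GL code would need one save/restore pair per state and breaks silently if any is missed:

```c
#include <assert.h>
#include <string.h>

/* Conceptual sketch only -- hypothetical names, not a real API. */

struct hud_ctx {
    /* stand-ins for a few of the many states a HUD pass touches */
    int blend_enabled;
    int depth_test;
    int scissor_test;
    int cull_face;
};

static struct hud_ctx saved_state;

/* Save all state in one call -- nothing can be forgotten. */
void hud_save_state(const struct hud_ctx *c)
{
    saved_state = *c;
}

/* Restore everything that was saved, again in one call. */
void hud_restore_state(struct hud_ctx *c)
{
    *c = saved_state;
}

/* Hypothetical HUD pass: clobbers whatever state it needs for its own
 * rendering, then restores the application's state wholesale. */
void hud_draw(struct hud_ctx *c)
{
    hud_save_state(c);
    c->blend_enabled = 1;   /* HUD graphs want blending enabled */
    c->depth_test    = 0;   /* drawn on top of the scene */
    c->scissor_test  = 1;
    c->cull_face     = 0;
    /* ... emit HUD draw calls here ... */
    hud_restore_state(c);
}
```

With real GL there is no such wholesale snapshot covering every piece of state a modern application can set, which is why the per-state bookkeeping described above is so easy to get wrong.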
2) I would have to add code paths for GL2, GL3 core, GLES1, and GLES2. I don't really want to worry about all those APIs or any future API. It might also be interesting to use the HUD with non-OpenGL state trackers, like st/xorg and st/xa.

3) Gallium has lower overhead than GL, and the HUD should have as little impact on framerate as possible.

I'm pretty sure everybody will benefit from this except Intel. I wholeheartedly wish the situation were different, but there is nothing I can do about that.

Marek

On Mon, Mar 25, 2013 at 5:47 PM, Alexander Monakov <amona...@gmail.com> wrote:
> I feel rather awkward asking, but: why implement this inside of
> Gallium, instead of as a standalone {egl,glX}SwapBuffers interceptor
> obtaining counter values via GL extensions, such as
> ARB_occlusion_query or AMD_performance_monitor? That way Intel (and
> Nouveau?) people could also benefit from it.
>
> Best regards,
> Alexander

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev