Hi all,

I would like to develop a SciPy toolkit for GPU-accelerated computation. Tristam MacDonald recently posted a neat Shader class exposing GLSL, which gave me a good starting point. I have a couple of further questions and would be glad for any help:
1) How do I implement offscreen rendering? It seems like Framebuffer Objects provide the best cross-"platform" solution, but pyglet does not explicitly provide an API to access them.

2) How do I render a single frame and grab it, instead of rendering at 60 fps until I'm reasonably sure the computation is done?

3) Are double-precision buffers available, or can I only do single-precision fixed-point processing in GLSL?

Thank you for all your hard work on exposing OpenGL to the masses. Pyglet is great!

Kind regards,
Stéfan
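
P.S. To make (1) and (2) more concrete, here is a rough, untested sketch of the kind of offscreen pass I have in mind: render once into a texture attached to an FBO, then read the pixels back. It assumes the driver supports GL_EXT_framebuffer_object and that pyglet.gl exposes those entry points through its generated extension bindings; the actual shader/draw calls are omitted.

import ctypes
import pyglet
from pyglet.gl import *

WIDTH, HEIGHT = 512, 512

# A (hidden) window is still needed to obtain a GL context.
window = pyglet.window.Window(visible=False)

# Create a texture to render into.
tex = GLuint()
glGenTextures(1, ctypes.byref(tex))
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)

# Create the FBO and attach the texture as its colour buffer
# (assumes the EXT_framebuffer_object functions are available).
fbo = GLuint()
glGenFramebuffersEXT(1, ctypes.byref(fbo))
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo)
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0)
assert (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
        == GL_FRAMEBUFFER_COMPLETE_EXT)

# Render exactly one frame, without entering the pyglet event loop
# (shader binding and the full-screen quad would go here).
glViewport(0, 0, WIDTH, HEIGHT)
glClear(GL_COLOR_BUFFER_BIT)
# ... draw calls ...

# Grab the result back into a ctypes buffer.
buf = (GLubyte * (WIDTH * HEIGHT * 4))()
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, buf)

# Restore the default framebuffer.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)

Is this roughly the intended way to do it with pyglet, or is there a cleaner route I'm missing?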
