Hi Dan,

> 1. Is it possible to manually control the advancement of time to
> guarantee the rendering framerate, i.e. a fixed framerate that will also
> control the playback of embedded multimedia elements?

Clutter has not been designed for this purpose, but when used with VSYNC it can run at a fixed fps. However, this may not be what you're looking for (a frame-by-frame renderer?).

> 2. Is it possible to capture the frames of the display, as well as the
> audio stream?

As for the audio stream, that has nothing to do with Clutter but with GStreamer.

As for the video stream, there is currently no easy way to do this. GStreamer can be used to display video inside Clutter, but Clutter doesn't offer a video capture feature (only texture capture, usable for taking screenshots, for example). I myself have been working on this topic for a while; here's what I have investigated so far:

* using OpenGL capture apps like yukon [1]; this works, but it's far from integrated, not to mention that the capture calls slow down your rendering and that the output format isn't necessarily standard, or is very memory-consuming (YUV or similar)

* using GObject timeouts or Clutter timelines to capture individual frames as pixbufs with get_pixbuf calls (in clutter-0.6), then saving them to disk (see the first sketch after this list); this works, but not reliably, notably in terms of precise timing. You also need to re-create the video stream from the image files and mux it afterwards with the audio

* I believe pippin wrote a patch for Clutter that passes frames to ffmpeg some time ago, but it never made it into trunk

* rendering to an FBO and using the cogl get_data [2] API call (clutter-0.8) to fetch the bitmap data; this needs a custom GStreamer plugin to feed a pipeline with the raw frames, probably based on appsrc/fakesrc and the handoff signal (see the second sketch after this list). I haven't gotten to an implementation yet

* my current approach is to use the new GStreamer gl branch for the rendering part, which already lets you capture the OpenGL rendering to a file via a glfilter plugin that uses a Clutter thread, and to manipulate the gst-gl OpenGL objects (the video texture) through the Clutter API, hoping that the Python GStreamer bindings will let me use Python-written callbacks. I only began trying this recently
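To make the second bullet a bit more concrete, here is a rough Python sketch of the timeout-based capture. It assumes the clutter 0.6-era call that snapshots the stage into a GdkPixbuf (written here as stage.snapshot(); the exact name of the Python binding may differ), and it also shows where the timing problem comes from: gobject.timeout_add() gives no hard guarantee about when the callback actually runs.

    import gobject
    import clutter

    FPS = 25
    frame_counter = [0]

    def capture_frame(stage):
        # Grab the current stage contents as a GdkPixbuf and save it as a
        # numbered PNG. NOTE: stage.snapshot() stands for the clutter 0.6
        # stage snapshot / get_pixbuf call; the exact Python binding name
        # is an assumption here.
        pixbuf = stage.snapshot(0, 0, int(stage.get_width()),
                                int(stage.get_height()))
        if pixbuf is not None:
            pixbuf.save("frame-%05d.png" % frame_counter[0], "png")
            frame_counter[0] += 1
        return True   # keep the timeout installed

    stage = clutter.Stage()
    stage.set_size(640, 480)
    stage.show_all()

    # Ask for a callback roughly every 1000/FPS milliseconds. This is only
    # approximate -- the scheduling jitter is exactly the reliability
    # problem mentioned in the list above.
    gobject.timeout_add(1000 // FPS, capture_frame, stage)

    clutter.main()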
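And a sketch of the appsrc idea from the fourth bullet: pushing the raw frames read back from the FBO into a GStreamer pipeline for encoding, using the gst-python 0.10 bindings instead of a compiled plugin. I haven't implemented this, so treat the caps fields and the origin of raw_bytes (which stands in for the cogl get_data readback) as placeholders, not a working recipe.

    import gst

    WIDTH, HEIGHT, FPS = 640, 480, 25

    # appsrc -> colorspace conversion -> Theora encoder -> Ogg muxer -> file.
    pipeline = gst.parse_launch(
        "appsrc name=src ! ffmpegcolorspace ! theoraenc ! oggmux "
        "! filesink location=capture.ogg")

    src = pipeline.get_by_name("src")
    # The caps have to describe whatever raw format the FBO readback
    # actually produces; the fields below are an assumption.
    src.set_property("caps", gst.Caps(
        "video/x-raw-rgb,bpp=24,depth=24,width=%d,height=%d,framerate=%d/1"
        % (WIDTH, HEIGHT, FPS)))
    pipeline.set_state(gst.STATE_PLAYING)

    def push_frame(raw_bytes, frame_number):
        # raw_bytes would come from the FBO readback, e.g. via
        # cogl_texture_get_data() on the offscreen texture (not shown here).
        buf = gst.Buffer(raw_bytes)
        buf.timestamp = frame_number * gst.SECOND // FPS
        buf.duration = gst.SECOND // FPS
        src.emit("push-buffer", buf)

    # Call push_frame() once per rendered frame, then finish with:
    #   src.emit("end-of-stream")
    #   pipeline.set_state(gst.STATE_NULL)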
As for C#, there are some bindings. I don't know about GStreamer's, though...

> 3. Is it possible to do all of these things when there may not be an X
> server running, and there may not be audio drivers present in the
> system. (this one would be nice, but is definitely optional as long
> as 1 and 2 are reasonable)

No: you need OpenGL, thus GLX, thus X (except on mobile devices, AFAIK). As for the audio drivers, GStreamer can work without audio hardware.

> Any information that you can give me would be of great help,
> especially examples and/or documentation of the api's required to
> achieve my goals. Thanks!

Hope this helps,

FLo

[1] Yukon: http://dbservice.com/projects/yukon -- seems down
[2] http://www.clutter-project.org/docs/cogl/0.8/cogl-Textures.html#cogl-texture-get-data

--
To unsubscribe send a mail to [EMAIL PROTECTED]
