On Wed, 2009-01-07 at 10:39 +0000, Andrew Church wrote:
> >What we essentially need is a safe way for a filter _plugin_ (while I
> >was referring to the core part handling the filtering) to deliver N
> >*more* (cloned) frames to the core, right?
>
> I think you're misunderstanding my approach.
Uhm, yes, I've realized that unfortunately I'm still missing some pieces.
The main reason is probably that I keep trying to map your proposal onto
the existing code and/or framebuffer model. Some things fit nicely into the
model I'm building in my mind, others don't, but the main problem is...
I'm thinking about a different thing! :)

> It's not an issue of "more" or "less" frames; I want to treat the two
> streams of frames as entirely separate, so filters are free to add,
> delete, reorder, delay (as with smartyuv and such), or do whatever they
> want and the core doesn't have to worry about keeping track.

Ok, that's one of my difficulties: I don't get what you mean by "keeping
track". The only thing that comes to my mind is the sliding window approach
we've discussed previously.

On this topic, after some more thought, I'm starting to convince myself
that the best thing altogether is to remove the multithreaded filter layer
and use one and only one thread for the filtering stage.

To help me understand the proposal, can you explain how the processing
stages should be connected and how they will exchange frames? I thought
about some kind of FIFO buffer for each processing stage (with a separate
framebuffer per stage instead of just the single big one we have right
now), and maybe that is what's misleading me. (That's also why I proposed
to simply use framebuffer_put() and why I didn't see the need for a full
framebuffer copy: a set of separate framebuffers can be implemented on top
of a central one.) See the rough sketch after my signature for what I have
in mind.

Bests,
-- 
Francesco Romani // Ikitt
http://fromani.exit1.org ::: transcode homepage
http://tcforge.berlios.de ::: transcode experimental forge
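P.S. To make the per-stage FIFO idea a bit more concrete, here is a minimal
sketch of what I have in mind. All the names here (TCStageFifo, TCFrame,
tc_stage_fifo_*) are made up for illustration and do not exist in the
current tree; the FIFO only moves frame *pointers* around, while the frames
themselves would still come from the central framebuffer pool
(framebuffer_put() and friends). The locking is only needed if the stages
end up running in different threads; error checking is omitted for brevity.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct TCFrame TCFrame;   /* stand-in for the core's frame type,
                                         owned by the central pool          */

    typedef struct TCStageFifo {
        TCFrame        **slots;       /* ring of frame pointers, not copies */
        int              size;
        int              head, tail, count;
        pthread_mutex_t  lock;
        pthread_cond_t   not_empty, not_full;
    } TCStageFifo;

    static TCStageFifo *tc_stage_fifo_new(int size)
    {
        TCStageFifo *f = calloc(1, sizeof(*f));
        f->slots = calloc(size, sizeof(*f->slots));
        f->size  = size;
        pthread_mutex_init(&f->lock, NULL);
        pthread_cond_init(&f->not_empty, NULL);
        pthread_cond_init(&f->not_full, NULL);
        return f;
    }

    /* hand a frame pointer to the next stage; blocks while the queue is full */
    static void tc_stage_fifo_put(TCStageFifo *f, TCFrame *frame)
    {
        pthread_mutex_lock(&f->lock);
        while (f->count == f->size)
            pthread_cond_wait(&f->not_full, &f->lock);
        f->slots[f->tail] = frame;
        f->tail = (f->tail + 1) % f->size;
        f->count++;
        pthread_cond_signal(&f->not_empty);
        pthread_mutex_unlock(&f->lock);
    }

    /* fetch the oldest frame pointer; blocks while the queue is empty */
    static TCFrame *tc_stage_fifo_get(TCStageFifo *f)
    {
        TCFrame *frame;
        pthread_mutex_lock(&f->lock);
        while (f->count == 0)
            pthread_cond_wait(&f->not_empty, &f->lock);
        frame = f->slots[f->head];
        f->head = (f->head + 1) % f->size;
        f->count--;
        pthread_cond_signal(&f->not_full);
        pthread_mutex_unlock(&f->lock);
        return frame;
    }

Each stage would then just be get -> process -> put into the next stage's
FIFO, and dropping a frame would simply mean handing it back to the central
pool instead of forwarding it.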