> This uses QCRenderer -initOffScreenWithSize:colorSpace:composition:, which
> would fit the bill perfectly, but unfortunately it doesn't seem to work for
> compositions that access a camera: it returns blank white images, maybe
> because it doesn't wait for the camera to come on. Is there a way to modify
> this to work with a camera?
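For context, I'm assuming the offscreen path you're describing looks
roughly like the sketch below (untested, typed in Mail; the composition
path and the 640x480 size are just placeholders):

    #import <Cocoa/Cocoa.h>
    #import <Quartz/Quartz.h>

    // Sketch of the offscreen path; path and size are placeholders.
    QCComposition *composition =
        [QCComposition compositionWithFile:@"/path/to/camera.qtz"];
    CGColorSpaceRef colorSpace =
        CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    QCRenderer *renderer =
        [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(640.0, 480.0)
                                       colorSpace:colorSpace
                                      composition:composition];
    // Drive one render, then grab the result.
    [renderer renderAtTime:0.0 arguments:nil];
    // Caller owns the returned image (the "create" rule).
    NSImage *snapshot = [renderer createSnapshotImageOfType:@"NSImage"];
    CGColorSpaceRelease(colorSpace);
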
It _might_ need a runloop to work? (I'm totally making that up, but it sounds
plausible)
If you sleep for a few seconds before driving a render and capturing the
output, do you get valid data? If so, it's just a timing issue (and
unfortunately, there's no way to make the Video Input patch work
synchronously). If you still get blank frames, I'd look into setting up a
simple runloop ("simple" is quite misleading in this context) and see if
that helps.
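By the sleep test / runloop idea, I mean something along these lines
(again untested, and assuming renderer is the QCRenderer from your
existing offscreen setup; the 2.0-second figure is a guess):

    // Crude timing test: give the camera a couple of seconds to warm up
    // before driving a render.
    [NSThread sleepForTimeInterval:2.0];

    // ...or, if the Video Input patch needs the runloop serviced, pump
    // the current runloop for the same interval instead of sleeping:
    [[NSRunLoop currentRunLoop]
        runUntilDate:[NSDate dateWithTimeIntervalSinceNow:2.0]];

    // Then render and grab the output as before.
    [renderer renderAtTime:0.0 arguments:nil];
    NSImage *snapshot = [renderer createSnapshotImageOfType:@"NSImage"];
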
With setups like this, I've often seen people write their own capture code
(using QTCapture sessions or something?), and then feed frames to QC as they
come in. It's nowhere near as simple as using the Video Input patch, but it
has the added bonus of doing what you want without timing issues complicating
matters.
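A rough sketch of that approach (untested; the FrameFeeder class and the
"inputImage" published input key are made up, and renderer is again the
offscreen QCRenderer from before):

    #import <Cocoa/Cocoa.h>
    #import <QTKit/QTKit.h>
    #import <Quartz/Quartz.h>

    // Delegate that pushes each captured frame into a QCRenderer.
    // Memory management and error handling are mostly omitted.
    @interface FrameFeeder : NSObject {
        QCRenderer *renderer;        // owned by whoever set this up
        NSTimeInterval startTime;
    }
    - (id)initWithRenderer:(QCRenderer *)aRenderer;
    @end

    @implementation FrameFeeder
    - (id)initWithRenderer:(QCRenderer *)aRenderer {
        if ((self = [super init])) {
            renderer = aRenderer;
            startTime = [NSDate timeIntervalSinceReferenceDate];
        }
        return self;
    }

    // QTCaptureDecompressedVideoOutput delegate callback: hand the frame
    // to the composition's published image input, then drive a render.
    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection {
        [renderer setValue:(id)videoFrame forInputKey:@"inputImage"];
        [renderer renderAtTime:[NSDate timeIntervalSinceReferenceDate] -
                               startTime
                     arguments:nil];
    }
    @end

    // Somewhere in your setup code, wire up the capture session:
    NSError *error = nil;
    QTCaptureSession *session = [[QTCaptureSession alloc] init];
    QTCaptureDevice *camera =
        [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    [camera open:&error];
    QTCaptureDeviceInput *input =
        [[QTCaptureDeviceInput alloc] initWithDevice:camera];
    [session addInput:input error:&error];

    QTCaptureDecompressedVideoOutput *output =
        [[QTCaptureDecompressedVideoOutput alloc] init];
    [output setDelegate:[[FrameFeeder alloc] initWithRenderer:renderer]];
    [session addOutput:output error:&error];
    [session startRunning];

Keep in mind the delegate callback usually arrives off the main thread,
so make sure rendering from there is safe in your setup.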
--
Christopher Wright
[email protected]