On Nov 19, 2010, at 3:19 PM, Christopher Wright wrote:
>> This uses QCRenderer -initOffScreenWithSize:colorSpace:composition:, which
>> would fit the bill perfectly, but unfortunately it doesn't seem to work for
>> compositions that access a camera; it returns blank white images, maybe
>> because it doesn't wait for the camera to come on? Is there a way to modify
>> this to work with a camera?
>
> It _might_ need a runloop to work? (I'm totally making that up, but it
> sounds plausible)
>
> If you sleep for a few seconds before driving a render and capturing the
> output, does it give you valid data? If so, it's just a timing thing (and
> unfortunately, there's no way to make the video input patch work
> synchronously). If it still doesn't, I'd look into setting up a simple
> runloop ("simple" in this context is quite misleading, unfortunately) and see
> if that helps.
>
> With setups like this, I've often seen people write their own capture code
> (using QTCapture sessions or something?), and then feed frames to QC as they
> come in. It's nowhere near as simple as using the Video Input patch, but it
> has the added bonus of doing what you want without timing issues complicating
> matters.
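For reference, the offscreen path I'm driving (with the sleep you suggested
added) is roughly this -- the size, path, and delay are just placeholders:

    #import <Cocoa/Cocoa.h>
    #import <Quartz/Quartz.h>

    CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    QCRenderer *renderer = [[QCRenderer alloc]
        initOffScreenWithSize:NSMakeSize(640, 480)
                   colorSpace:cs
                  composition:[QCComposition compositionWithFile:@"/path/to/camera.qtz"]];
    CGColorSpaceRelease(cs);

    [NSThread sleepForTimeInterval:3.0];          // give the camera time to warm up
    [renderer renderAtTime:0.0 arguments:nil];    // drive a single frame
    NSImage *frame = [[renderer createSnapshotImageOfType:@"NSImage"] autorelease];
    // frame still comes back blank white when the comp uses the Video Input patch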
Alas, neither adding a runloop nor sleeping helps. It seems the problem is
internal to the -renderAtTime: method: it doesn't check whether the comp uses
the Video Input patch, and so doesn't wait for the camera to come on. If the
video patch had a synchronous flag, that might fix it. Guess I'll try to find
another route...
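Probably the QTCapture-and-feed-frames approach you describe -- something like
this, maybe? (Completely untested sketch; "videoImage" is a made-up published
image input that would replace the Video Input patch in the comp, and error
handling is omitted.)

    #import <Cocoa/Cocoa.h>
    #import <Quartz/Quartz.h>
    #import <QTKit/QTKit.h>

    @interface CaptureRenderer : NSObject {
        QCRenderer *renderer;
        QTCaptureSession *session;
    }
    @end

    @implementation CaptureRenderer

    - (void)start
    {
        CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        renderer = [[QCRenderer alloc]
            initOffScreenWithSize:NSMakeSize(640, 480)
                       colorSpace:cs
                      composition:[QCComposition compositionWithFile:@"/path/to/camera.qtz"]];
        CGColorSpaceRelease(cs);

        // Open the default camera and start a capture session ourselves.
        QTCaptureDevice *camera = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
        [camera open:NULL];

        session = [[QTCaptureSession alloc] init];
        [session addInput:[[[QTCaptureDeviceInput alloc] initWithDevice:camera] autorelease]
                    error:NULL];

        QTCaptureDecompressedVideoOutput *output =
            [[[QTCaptureDecompressedVideoOutput alloc] init] autorelease];
        // Ask for a QC-friendly pixel format.
        [output setPixelBufferAttributes:
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
                                        forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey]];
        [output setDelegate:self];
        [session addOutput:output error:NULL];
        [session startRunning];
    }

    // Called for every captured frame: feed it to the comp, render, grab the result.
    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection
    {
        [renderer setValue:(id)videoFrame forInputKey:@"videoImage"];
        [renderer renderAtTime:[NSDate timeIntervalSinceReferenceDate] arguments:nil];
        NSImage *snapshot = [[renderer createSnapshotImageOfType:@"NSImage"] autorelease];
        // ... hand the snapshot off to whatever needs it ...
    }

    @end

The comp itself would then just read the published image input instead of
using the Video Input patch, so there'd be nothing left for -renderAtTime: to
wait on.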
Thanks
-Jon