As I understand it (others may correct me), when QC does direct CI
rendering, every on-screen pixel is a pixel for Core Image to process.
So if your pixelated image is 16x16 and you're displaying it at
256x256, it takes the VRAM and processing power of a full 256x256
render, rather than having OpenGL scale up the 16x16 texture.
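Rough back-of-the-envelope numbers (just a sketch; I'm assuming a
4-byte RGBA working format, and Core Image may well use a wider one
internally):

// Pixels and bytes touched for a 16x16 image shown at 256x256.
let bytesPerPixel = 4                    // assumed RGBA8 working format
let glPath = 16 * 16 * bytesPerPixel     // 1 KB -- upload once, let GL scale it
let ciPath = 256 * 256 * bytesPerPixel   // 256 KB -- CI renders every screen pixel
print("GL texture upload: \(glPath) bytes, direct CI render: \(ciPath) bytes")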
On Mar 21, 2008, at 2:01 PM, Christopher Wright wrote:
You need to bypass the QC scaling: on the Billboard, select Native
Core Image Rendering; it will then respect the image's Filtering
input, which is revealed when you enable Show Advanced Input Sampler
Options on any CI filter.
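For anyone curious what nearest-neighbour filtering amounts to under
the hood, here is a minimal sketch of a custom Core Image kernel that
snaps sample coordinates to texel centres. This is not the Billboard's
implementation; the kernel name, the pixelScale parameter, and the
Swift wrapper are my own, for illustration only:

import CoreImage

// Custom CI kernel: for each output pixel, snap back to the centre of
// the corresponding source texel so no interpolation happens.
let nearestKernel = CIKernel(source: """
    kernel vec4 nearestUpscale(sampler src, float pixelScale)
    {
        vec2 d = destCoord();                  // output pixel being computed
        vec2 s = floor(d / pixelScale) + 0.5;  // snap to a source texel centre
        return sample(src, samplerTransform(src, s));
    }
    """)

func pixelUpscale(_ image: CIImage, by scale: CGFloat) -> CIImage? {
    let extent = image.extent.applying(CGAffineTransform(scaleX: scale, y: scale))
    return nearestKernel?.apply(
        extent: extent,
        roiCallback: { _, rect in
            // Each output rect only needs the matching (smaller) source rect.
            return rect.applying(CGAffineTransform(scaleX: 1 / scale, y: 1 / scale))
        },
        arguments: [image, scale])
}

In QC you'd wrap equivalent kernel source in a Core Image Filter patch
instead; either way every output pixel gets computed, which is why the
VRAM point above applies.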
Woah, that works too! Thanks :) Back in October I asked this, and
was told "Not Possible" -- glad that's not entirely the case.
http://lists.apple.com/archives/Quartzcomposer-dev/2007/Oct/msg00125.html
Is there a reason to prefer the other method (the one I described a
bit ago) over this one (which seems more elegant)?
[ditto what vade just asked I guess]
--
[ christopher wright ]
[EMAIL PROTECTED]
http://kineme.net/