Hey all,
I asked a contact of mine at NVIDIA to look into this -- he's asked
that one (or more) of you post the technical details of this issue
to their forums:
http://forums.nvidia.com/index.php?s=a69d3235ab50fa3c72d5f36a946db277&showforum=23
and he'll get someone in the Mac driver group to look into it.
If you do, please cc us with the URL and I'll pass it back to my NVIDIA
contact so he can get the ball rolling...
thx!
On Feb 9, 2009, at 11:43 PM, Florian Albrecht wrote:
We do have rdar://6430376 listed for the issue originally mentioned
at the start of this thread. We created a special version of the
QCTV example for demonstration.
Greetings,
Florian Albrecht
Boinx Software.
On 10.02.2009, at 05:10, Troy Koelling wrote:
Of course it never hurts to have duplicate bug reports filed so the
decision makers know what priority to give; however, this is a known
problem from the last time it came up.
Sent from my iPhone
On Feb 9, 2009, at 7:01 PM, Michael Diehr <[email protected]> wrote:
Is anyone able to submit a proper bug report on this? I'd be
happy to try, but I think my report would be lacking in details:
"QC on NVIDIA chips is bad, mkay...uh...pls fix it"?
I'd hope that a proper bug report, one that suggests the right
fix(es), could get some attention.
On Feb 9, 2009, at 3:15 PM, vade wrote:
A little birdy mentioned this has to do with the image channel
ordering, and that the ATI drivers automatically swap the
channels into the necessary ordering to make things faster, using a
shader on the GPU. Apparently, something about this path is
suboptimal on NVIDIA.
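To make the channel-ordering point concrete, here is a minimal sketch of the kind of BGRA-to-RGBA swizzle being described. The function name and the CPU implementation are illustrative assumptions, not the driver's actual code -- the point is that a driver which does this reorder in a GPU shader pays almost nothing, while one that falls back to a per-pixel reorder like this pays for it on every frame.

```python
# Hypothetical illustration of the channel swizzle described above:
# reordering a BGRA byte stream into RGBA. A driver doing this in a GPU
# shader is nearly free; doing it per pixel on the CPU, as below, is the
# kind of per-frame cost that would explain the slowdown.

def bgra_to_rgba(pixels: bytes) -> bytes:
    """Reorder a BGRA byte stream into RGBA (assumes 4 bytes per pixel)."""
    out = bytearray(len(pixels))
    for i in range(0, len(pixels), 4):
        b, g, r, a = pixels[i:i + 4]
        out[i:i + 4] = bytes((r, g, b, a))
    return bytes(out)

# One fully opaque blue pixel stored as BGRA:
print(bgra_to_rgba(bytes((255, 0, 0, 255))).hex())  # -> '0000ffff'
```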
Maybe it's specific to QC's opaque handling of images (the QCImage
datatype), but I've heard that this holds true for other apps.
Your tests seem to confirm this, and my testing on a colleague's
new MBP shows that while it's faster than the older NV 8600M
hardware, my X1600 still stomps it for CI/video pipeline
processing.
The raw GL calls, though (vertex drawing/filling, etc.), are much,
much faster than on my X1600, so it's give and take.
On Feb 9, 2009, at 4:36 PM, Michael Diehr wrote:
Your benchmarks correlate with VRAM also. Have you tried the 8600M
with the same VRAM as the ATI cards?
We are just testing the ones we have, and don't have that
particular combo.
The tests we are doing should not be taxing VRAM: we are
using 720x480 internal pictures, which works out to under 2 MB per
image, and only a dozen or so images max. So I don't think we
are running out of VRAM.
I just ran another test comparing the ATI 1...@256mb vs. the
NVIDIA 8600 @ 128 MB, this time doing a very simple test (3 video
inputs overlaid). Results: ATI = 30 fps, NVIDIA = 18 fps.
Again, three 2 MB per-frame images should not be getting anywhere
near the 128 MB VRAM total.
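The arithmetic above can be checked quickly. This sketch assumes 4 bytes per pixel (RGBA8) and ignores driver overhead, padding, and mipmaps, so the real footprint would be somewhat higher, but nowhere near the VRAM ceiling:

```python
# Back-of-the-envelope check of the VRAM figures above, assuming
# 4 bytes per pixel (RGBA8) and ignoring driver overhead and mipmaps.

def texture_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Raw pixel storage for one uncompressed texture."""
    return width * height * bytes_per_pixel

per_image = texture_bytes(720, 480)               # 1,382,400 bytes
per_image_mb = per_image / (1024 ** 2)            # ~1.32 MB, "under 2 MB"
three_frames_mb = 3 * per_image / (1024 ** 2)     # ~3.96 MB total

print(f"{per_image_mb:.2f} MB per image")
print(f"{three_frames_mb:.2f} MB for 3 frames")   # far below 128 MB of VRAM
```

Even a dozen such images is roughly 16 MB, which supports the conclusion that the test is not VRAM-bound.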
Now, perhaps the 256 MB version of the 8600 has not only larger
VRAM but faster VRAM as well, in which case it'd perform
better? Still, my gut feeling is that the NVIDIA chips or drivers
are not well suited to QC, based on what I've seen.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list ([email protected])
Help/Unsubscribe/Update your Subscription:
http://lists.apple.com/mailman/options/quartzcomposer-dev/archive%40mail-archive.com
This email sent to [email protected]