On 10 Aug 2006, at 6:30 AM, Tyler Turner wrote:
> Did you read the user accounts I pointed out in which other users
> reported improvement after switching to video cards with reputations
> for better 2D graphics? People are saying that performance improved
> for them. Are you just unwilling to believe them?
Yes. How many times do I have to say this, Tyler? I don't trust "user
accounts." Subjective user reports are notoriously unreliable. That's
not evidence. People also report that taking a placebo reduces their
allergy symptoms, improves their sex life, etc. If someone's just
bought a new video card, of course they are going to want to believe
that it actually makes their user experience better, even if it's not
the case.
> They're not interesting because the audience that's reading these
> reviews doesn't care about 2D performance.
But listen... lots of people obviously do care about 2D performance.
If there is such a great variance in 2D performance on modern video
cards, as you say, where are the comparative reviews (with
benchmarks) to bear this out? If one video card gives superior
performance when scrolling a Photoshop image, wouldn't you expect the
company that makes that card to tout it?
> I've pointed out the only benchmarks I could find, and they showed a
> huge variance.
On non-current apples-and-oranges hardware, in a test that may or may
not reflect measurable real-world use.
> I've shown strong reason to believe that this variance would still
> exist.
Sorry, Tyler, but no you haven't. All you've shown is that you
personally strongly believe this variance still exists.
> You show me why having a faster overall processing speed has stopped
> improving performance.
Because -- as I've said before a whole bunch of times -- 2D graphics
performance is CPU-bound, not GPU-bound. Unless you have some sort of
massive mismatch -- like, say, a quad 3.0 GHz Xeon with, say, a Rage
128 -- the CPU is always going to be the bottleneck in 2D drawing.
It's really not all that mysterious. The same phenomenon exists in 3D
applications as well. If you have an older computer -- say a 2.0 GHz
Pentium 4 with a Radeon 9700 -- and you're trying to play the latest,
most demanding 3D games, like, say, Quake 4 -- replacing the older
video card with a Radeon X1900 won't improve your Quake framerates at
all. Why not? Because your slow CPU can't feed the GPU fast enough
to make a difference.
CPU-bound vs. GPU-bound is a topic that's widely discussed in video
card reviews and on tech sites like Ars Technica. If your application
is CPU-bound, adding a faster graphics card doesn't help you. Adding
a faster graphics card only helps when your graphics card can't keep
up with the instructions your CPU is sending it.
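To make that concrete, here's a toy model of my own (a sketch for illustration, not anything measured from Finale or any review): if each frame has to be prepared by the CPU and then drawn by the GPU, and the two stages are pipelined, throughput is set by whichever stage is slower. Speeding up the GPU does nothing while the CPU is the slow stage.

```python
# Toy pipeline model (my own illustration): frame time is dominated by
# the slower of the two stages, so a faster GPU only helps when the GPU
# is the one that can't keep up.

def frame_time_ms(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """With CPU and GPU work pipelined, the slower stage sets the pace."""
    return max(cpu_ms_per_frame, gpu_ms_per_frame)

# CPU-bound case: the CPU needs 20 ms to prepare each frame, the GPU
# only 2 ms to draw it. Swapping in a GPU four times as fast (0.5 ms)
# leaves the frame time unchanged.
print(frame_time_ms(20.0, 2.0))  # -> 20.0
print(frame_time_ms(20.0, 0.5))  # -> 20.0, the faster GPU buys nothing
# GPU-bound case: only here does GPU speed matter.
print(frame_time_ms(2.0, 8.0))   # -> 8.0
```

The numbers are made up; the point is only the shape of the relationship -- the max() -- which is why a Radeon X1900 behind a slow CPU draws no faster than the card it replaced.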
> I've shown that the video card in my system is a bottle-neck for
> Finale.
Again, no you haven't. You've shown that you believe that your video
card is a bottleneck. And I believe that you believe your video card
is a bottleneck. I'm just skeptical that this is actually the case,
because the video card is almost never the bottleneck in 2D drawing.
>> Tyler, I honestly do not want to be unduly confrontational here,
>> but seriously, you are the person who did not know the difference
>> between the 7300 GS and the 7300 GT, and earlier tried to cite GS
>> benchmarks as if they were representative of GT performance.
> Wrong, Darcy, but nice try. I didn't specify the 7300GT
Look, Tyler, this is not a big deal, really, but you either made a
mistake or intentionally made a misleading statement. You wrote:
> To be perfectly honest I would not be surprised if the Radeon 9700
> could still compete effectively with Nvidia's 7300. According to this
> page, it outperforms it.
> http://freestone-group.com/video-card-stability-test/benchmark-results.html
The only "7300" we'd been talking about is the 7300 GT -- the one
used in the Mac Pro. But there are no 7300 GT benchmarks on the page
you linked to, only 7300 GS benchmarks. No one had previously said
anything about the GS. And the 7300 GT easily outperforms the Radeon
9700.
> Okay, listen. I have two computers here. One runs at 3.2GHz (computer
> A), the other at 3.0GHz (computer B) (both using same generation
> P4's) and both configured 1024x768 32-bit color. Taking the same
> large file into a freshly installed PrintMusic 2006 demo on both
> computers, I drag the screen around with the graphic acceleration
> turned all the way down. As expected, the machines are very similar,
> with the 3.2GHz machine taking an almost imperceptible performance
> lead.
>
> Now I turn the acceleration all the way up. Suddenly, the performance
> of computer B jumps way ahead of computer A, despite the fact that it
> has the slightly slower processor. Redraws are, I'm estimating, about
> 50% faster. To be sure, both computers take a HUGE performance hit
> when turning up the graphic acceleration, but for one computer the
> hit is much smaller than the other.
>
> Why?
The only meaningful comparison would be to try the same graphics card
in each of your computers, using the same application, the same
slider settings and the same resolution. Do some scroll and/or redraw
tests with a stopwatch.
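If you want something more repeatable than eyeballing a stopwatch, here's a minimal timing-harness sketch. It assumes you can trigger the operation repeatably; the `redraw` callable is a placeholder of my own invention standing in for whatever you can actually drive (a scripted scroll, a forced repaint, a macro), since PrintMusic itself isn't scriptable this way.

```python
import time

def time_redraws(redraw, repeats: int = 20) -> float:
    """Time `repeats` calls of a redraw-like operation and return the
    mean time per call in milliseconds. `redraw` is a hypothetical
    callable standing in for whatever repeatable action you can drive.

    Averaging over many repeats smooths out one-off hiccups that a
    hand-held stopwatch on a single redraw would never catch.
    """
    start = time.perf_counter()  # monotonic, high-resolution clock
    for _ in range(repeats):
        redraw()
    elapsed = time.perf_counter() - start
    return elapsed / repeats * 1000.0

# Example with a dummy workload in place of a real redraw:
mean_ms = time_redraws(lambda: sum(range(10_000)), repeats=10)
print(f"mean redraw time: {mean_ms:.2f} ms")
```

Run the identical harness on each machine with the same card, same slider settings, and same resolution, and you've got numbers you can actually compare instead of impressions.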
Cheers,
- Darcy
-----
[EMAIL PROTECTED]
http://secretsociety.typepad.com
Brooklyn, NY
_______________________________________________
Finale mailing list
[email protected]
http://lists.shsu.edu/mailman/listinfo/finale