> > You know what they found out with all of the hundreds of millions of
> > dollars they spent?  Dedicated hardware still does it faster and
> > cheaper.  Period.  It's just like writing a custom routine to sort an
> > array will pretty much always be faster than using the generic qsort.
> > When you hand-tune for a specific data set you will always get better
> > performance.  This is not to say that the generic implementation will
> > not perform well, or even acceptably well, but only to say that it
> > will never, ever, ever perform better.
> 
> Here you are comparing different algorithms.  A custom sort algorithm will
> perform much better than a standard qsort; I agree.  Implementing something
> in hardware does not, however, mean it uses a more efficient algorithm.  A
> hardware implementation is just that: an implementation.  It does not
> change the underlying algorithms being used.  In fact, it tends to set the
> algorithm in stone, which makes it very hard to adopt new, better
> algorithms as they are invented.  To move to a better algorithm you must
> wait for a hardware manufacturer to implement it, and then fork out more
> money.
> 
> Dedicated hardware can do a limited set of things faster.  There is no way
> to increase its capabilities without purchasing new hardware.  That is the
> weakness of dedicating hardware to very specific functionality: if a
> better algorithm is invented, it can take an extremely long time to be
> brought to market, if it is brought at all, and it will cost yet more
> money.  Software has the advantage of being able to adopt new algorithms
> much more quickly.  If a new algorithm is sufficiently better than the
> old, a software implementation of the new algorithm will in fact
> outperform a hardware implementation of the old one.  Algorithms are at
> least an order of magnitude more important than the implementation itself.
> 
> -Raystonn

Yes.  Choosing the correct (best) algorithm for a given problem does the
most to reduce calculation cost.  Yes, once a piece of silicon is etched,
its featureset is 'set in stone'.  And yes, if you want the latest and
greatest featureset in silicon, you'll always have to fork out more money.
That's how it's always been, and how it will always be.
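
To make the qsort analogy above concrete, here's a minimal sketch in C
(toy code, not from this thread): the generic comparison sort next to a
counting sort that exploits the knowledge that the keys are single bytes.

#include <stdlib.h>
#include <string.h>

/* Comparator for the generic qsort: works on anything, knows nothing
 * about the data. */
static int cmp_byte(const void *a, const void *b)
{
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

void sort_generic(unsigned char *data, size_t n)
{
    qsort(data, n, 1, cmp_byte);        /* O(n log n) comparisons */
}

/* Counting sort: because the keys are known to be single bytes, the
 * whole job is two linear passes -- O(n + 256), no comparisons at all. */
void sort_bytes(unsigned char *data, size_t n)
{
    size_t count[256] = {0};
    for (size_t i = 0; i < n; i++)
        count[data[i]]++;
    size_t k = 0;
    for (int v = 0; v < 256; v++) {
        memset(data + k, v, count[v]);
        k += count[v];
    }
}

Knowing the data lets you pick a fundamentally cheaper algorithm; that's
the win Raystonn is describing, and it's independent of hardware vs.
software.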

However, none of the commodity general-purpose CPUs are designed for
highly parallel execution of parallelizable algorithms--which just
about every graphics operation is.  How many pixels can a 2GHz Athlon
process at a time?  Usually just one.  How many can dedicated silicon?
Roughly as many as can be fetched from memory at a time.
Thus, the algorithm is _not_ always an order of magnitude more
important than the implementation itself--especially when a parallel
implementation can provide orders of magnitude more performance than
a serial implementation of the same, or even a superior, algorithm.
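
To put the "one pixel at a time" point in code, here's a minimal sketch
in C (hypothetical buffers): a 50% blend done one color channel per
iteration, then the same blend done four channels per iteration with a
classic packed-byte averaging trick.  Dedicated silicon takes the same
idea much further, applying the operation to many whole pixels per clock.

#include <stdint.h>
#include <stddef.h>

/* Scalar reference: one byte (one color channel) per iteration. */
void blend50_scalar(const uint8_t *a, const uint8_t *b,
                    uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (uint8_t)((a[i] + b[i]) >> 1);
}

/* SWAR version: four channels per iteration in one 32-bit word.
 * (x & y) + (((x ^ y) >> 1) & 0x7f7f7f7f) averages each byte lane;
 * the mask keeps bits shifted out of one lane from leaking into its
 * neighbor.  Assumes 4-byte-aligned buffers and n_words = bytes / 4. */
void blend50_swar(const uint32_t *a, const uint32_t *b,
                  uint32_t *dst, size_t n_words)
{
    for (size_t i = 0; i < n_words; i++) {
        uint32_t x = a[i], y = b[i];
        dst[i] = (x & y) + (((x ^ y) >> 1) & 0x7f7f7f7fu);
    }
}

Same algorithm, four times the work per instruction; a rasterizer
pipeline is this idea writ large.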

It remains a fact that, in many cases where graphics algorithms are
concerned, even less efficient algorithms implemented in a highly
parallel fashion in specialized silicon (even _old_ silicon--a Voodoo2)
can still significantly outperform the snazziest new algorithm
implemented serially in software on even a screaming-fast
general-purpose CPU.  (See the links earlier in the thread comparing
hardware with a Voodoo2 vs. software on a 1+ GHz Athlon.)
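
A rough back-of-envelope illustrates why (illustrative figures, not a
benchmark): a Voodoo2's pixel pipeline runs at roughly 90 MHz and
retires about one textured pixel per clock--on the order of 90
Mpixels/s of fill.  A software rasterizer spending an optimistic 20
cycles per textured, Z-tested pixel on a 1 GHz Athlon tops out around
50 Mpixels/s, and that's before it stalls on texture fetches from main
memory.  The fixed-function pipeline wins on sheer width, not on
algorithmic cleverness.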


Nick


