On Tue, Jun 29, 2010 at 5:36 PM, Roger Wright <[email protected]> wrote:
> One of our engineers has a modelling program that often takes several
> hours to crunch the data and render the output on his 32-bit office
> machine.

  You need to find out *why* it's slow.  :)

  For example, if it's CPU bound but not starved for RAM or I/O
bandwidth, then processor speed and architecture matter hugely.
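A quick-and-dirty way to tell the two cases apart is to compare CPU time
against wall-clock time: a CPU-bound task keeps the processor busy for
nearly the whole run, while an I/O-bound (or waiting) task leaves it
mostly idle.  Here's a toy sketch of that idea in Python -- the
`classify` helper and the 0.5 threshold are my own inventions, not part
of any profiler:

```python
import time

def classify(workload, threshold=0.5):
    """Run `workload` and guess whether it looks CPU bound or I/O bound.

    Compares CPU time to wall-clock time: near 1.0 means the CPU was
    busy the whole time (CPU bound); near 0.0 means the process was
    mostly waiting (I/O bound, or just sleeping).
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    workload()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    ratio = cpu / wall if wall > 0 else 0.0
    return "CPU bound" if ratio > threshold else "I/O (or wait) bound"

def spin():
    # Burn CPU for roughly 0.3 seconds.
    end = time.perf_counter() + 0.3
    while time.perf_counter() < end:
        pass

def wait():
    # Sleep, simulating a process blocked on I/O.
    time.sleep(0.3)

print(classify(spin))   # CPU bound
print(classify(wait))   # I/O (or wait) bound
```

A real profiler does far more, of course, but even this crude ratio
points you at the right hardware upgrade.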

  Even if you narrow it down to "CPU bound", a higher clock speed or
newer silicon doesn't always mean it will run faster.  I remember when
the Pentium 4 first came out, a lot of applications that were compiled
for the Pentium III actually ran *slower* on the P4, because the
optimization strategy was totally different.  There was also a time
when Intel had better floating-point performance while AMD had better
integer performance (or maybe I have it backwards).

  It also depends on the processing model the program uses.  For
example, a single-threaded app will generally only benefit from a
higher instruction rate (clock speed); extra cores do nothing for it.
A multi-threaded app will usually benefit *much* more from multiple
cores.  It's the difference between whether a 3 GHz single-core
processor or a 4-way 1 GHz processor is the bigger win.
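To make that concrete, here's a sketch of the same CPU-heavy work run
serially and then spread across worker processes.  The `crunch`
function is a made-up stand-in for one chunk of modelling work; I'm
using processes rather than threads because CPython's GIL keeps threads
from running Python bytecode on multiple cores at once:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for one chunk of CPU-heavy modelling work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def serial(chunks):
    # One core, one chunk at a time -- only clock speed helps here.
    return [crunch(n) for n in chunks]

def parallel(chunks, workers=4):
    # Spread chunks across worker processes -- extra cores help here.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crunch, chunks))

if __name__ == "__main__":
    chunks = [2_000_000] * 8
    t0 = time.perf_counter()
    serial(chunks)
    print(f"serial:   {time.perf_counter() - t0:.2f}s")
    t0 = time.perf_counter()
    parallel(chunks)
    print(f"parallel: {time.perf_counter() - t0:.2f}s")
```

On a 4-core box the parallel version should finish in a fraction of the
serial time; on a single core it buys you nothing (and costs a little
overhead), which is exactly the point above.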

  Or maybe CPU doesn't matter at all, and the modeling program is
actually doing a ton of reads and writes to process a dataset, and
what matters most is the speed of the disks and I/O subsystem.
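If you suspect that case, a crude sequential-read benchmark will tell
you what the disk subsystem can actually deliver.  This is a rough
sketch I'd hedge heavily -- the OS page cache will flatter the number
unless the file is much bigger than RAM, and the helper names are mine:

```python
import os
import tempfile
import time

CHUNK = 1024 * 1024  # read in 1 MiB chunks

def write_test_file(path, mib=64):
    """Write `mib` mebibytes of random data to `path`."""
    block = os.urandom(CHUNK)
    with open(path, "wb") as f:
        for _ in range(mib):
            f.write(block)

def read_throughput(path):
    """Return sequential read speed in MiB/s.

    Caveat: if the file still sits in the OS page cache, this measures
    memory speed, not disk speed.  Use a file larger than RAM (or drop
    the cache) for an honest number.
    """
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / CHUNK) / elapsed

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "testfile.bin")
        write_test_file(path, mib=64)
        print(f"{read_throughput(path):.0f} MiB/s")
```

If the number you get is far below what the modelling program needs,
no amount of CPU will save you.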

  You get the idea.  :-)

  There are entire books written on the subject of "profiling".  I
just know some bits and pieces.  Enough to get me by so far.  One
reason I like Process Explorer's per-process graphs is they can help
even a non-expert like me see what's going on.  If the CPU graph is a
sawblade and the I/O graph looks like a plateau (from a distance,
anyway), chances are you need a better disk subsystem more than you
need a faster CPU.

-- Ben

