Hello everybody,

I have to admit that I have always been disappointed by the performance
of my iMac running Debian, and have a gut feeling that it is faster
under OS X. Yesterday, I found by chance a "System Profiler and
Benchmarks" facility in the GNOME System menu, and ran it on my iMac
and on an Intel-based laptop. The benchmark provides additional
reference values for a Celeron M and a PowerPC 740/750, which I have
also included.

The G5:
Processor       PowerPC PPC970FX, altivec supported (1800,00MHz)
Memory  745MB (204MB used)

The laptop:
Processor       2x Intel(R) Core(TM)2 CPU L7400 @ 1.50GHz
Memory  1547MB (258MB used)
(Apparently, the benchmark is not parallelised and uses only one core.)


Benchmarks:

Higher is better                         Zlib   MD5     SHA1
PowerPC PPC970FX (1800,00MHz)           15701    18       19
Core(TM)2 CPU L7400 @ 1.50GHz           12611    49       56
Intel(R) Celeron(R) M 1.50GHz            8761    38       49
PowerPC 740/750 (280.00MHz)              2150     7        6


Lower is better                 Fibonacci       Blowfish        Raytracing
PowerPC PPC970FX (1800,00MHz)          18             62                25
Core(TM)2 CPU L7400 @ 1.50GHz           6             21                29
Intel(R) Celeron(R) M 1.50GHz           8             26                40
PowerPC 740/750 (280.00MHz)            58            172               161


The result is that on some particular types of computation, the G5
performs extremely badly: on the MD5 test, for instance, it scores 18
against 38 for an old 1.5 GHz Celeron machine, i.e. it is roughly twice
as slow. On some other tests, the performance of the processors scales
with the clock frequency.
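To make the comparison concrete, here is a small sketch in plain Python
(the numbers are simply copied from the MD5 column above; the script is
only an illustration, not part of the benchmark) that normalises the
scores by clock frequency. If performance scaled with frequency alone,
the normalised values would be roughly equal:

  # MD5 scores from the table above ("higher is better"), keyed by CPU,
  # together with the nominal clock frequency in MHz.
  scores = {
      "PPC970FX @ 1800 MHz":       (1800, 18),
      "Core 2 L7400 @ 1500 MHz":   (1500, 49),
      "Celeron M @ 1500 MHz":      (1500, 38),
      "PowerPC 740/750 @ 280 MHz":  (280,  7),
  }

  for cpu, (mhz, md5) in scores.items():
      # Score per MHz: equal values would mean the results scale
      # linearly with clock frequency.
      print(f"{cpu:28s} MD5/MHz = {md5 / mhz:.4f}")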

Is this a known characteristic of the G5, or are there specific
parameters to switch on in the kernel, or elsewhere, to get the
expected performance of a 1.8 GHz chip?
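
In case it is relevant, one place such parameters might live is the
kernel's cpufreq interface; below is a minimal sketch (assuming the
standard sysfs cpufreq layout, which may or may not be exposed on this
machine) that prints the current governor and frequency limits, to
check whether the CPU is being clocked down:

  # Minimal check of the cpufreq settings via sysfs; the paths are the
  # standard Linux layout and may be absent if cpufreq is not enabled.
  from pathlib import Path

  base = Path("/sys/devices/system/cpu/cpu0/cpufreq")
  for name in ("scaling_governor", "scaling_cur_freq",
               "scaling_min_freq", "scaling_max_freq"):
      node = base / name
      if node.exists():
          print(f"{name}: {node.read_text().strip()}")
      else:
          print(f"{name}: not available (cpufreq not exposed?)")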


Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan

