Ian Howson wrote:
<snippety snip>
By the way, is it not true that the 'pipelining' that is a feature of
x86 CPUs starting with the i586, which I pointed out in one of my
previous posts, is (another name for) an implementation of 'parallel'
processing?
So, in this way there is true parallelism in the x86 arch.
Holy shit! You are soooo off base here it's not funny. 'More than one
thing per clock cycle' -> What do clock cycles have to do with
parallelism? Nothing. Concurrency means 'concurrent'. If two operations
complete in one clock cycle *in series*, then it's not parallel. It's
fast, but not parallel.
Have a look at the Intel documentation on the Intel IA-32 Architecture -
Series x86. You will be pleasantly surprised.
Just about every CPU in existence is pipelined. It's an implementation
technique, nothing more. It doesn't change the programmer-visible
model of the CPU, which is a machine which will execute instructions
one at a time. The programmer doesn't have to change their software to
account for various pipelining structures. As far as the programmer is
concerned, instructions go through one at a time, in order.
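The point above can be sketched with a toy timeline. This is a minimal illustration assuming a classic textbook 5-stage pipeline (IF/ID/EX/MEM/WB), not a model of any particular Intel chip; the function name `pipeline_timeline` is made up for the example.

```python
# Toy timeline for a classic in-order 5-stage pipeline
# (Fetch, Decode, Execute, Memory, Writeback). The 5-stage split is
# the textbook model, not a description of any specific CPU.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_timeline(num_instructions):
    """Return, for each instruction, the cycle in which it retires (WB)."""
    # Instruction i enters IF at cycle i and leaves WB at cycle i + 4.
    return [i + len(STAGES) - 1 for i in range(num_instructions)]

completion = pipeline_timeline(8)

# Completion cycles are strictly increasing: the programmer-visible
# order is preserved even though up to 5 instructions are in flight.
assert completion == sorted(completion)

# Once the pipeline fills, one instruction retires per cycle:
# 8 instructions finish in 12 cycles instead of 40 serial cycles.
print(completion)   # [4, 5, 6, 7, 8, 9, 10, 11]
```

The overlap buys throughput, but the retire order is the program order, which is exactly why pipelining stays invisible to the programmer.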
This is one source for my information:
http://academic.eng.au.edu/ce3105/ArtofAssembly/CH03/CH03-3.html#HEADING3-192
According to the above,
pipelining (http://en.wikipedia.org/wiki/Pipelining) first appeared on
the 80486, but with no superscalar operation
(http://en.wikipedia.org/wiki/Superscalar).
Pipelining combined with superscalar operation first appeared on the
Pentium (i586).
It is pipelining combined with superscalar operation that lets a
single CPU chip execute multiple instructions in a parallel sense. I
think the misunderstanding was that I did not mention superscalar. If
a CPU instruction requires 1 clock cycle, then with pipelining and
superscalar it is possible to complete multiples of these instructions
in 1 clock cycle (a clock cycle being a span of time whose duration
depends on the clock speed of the CPU; e.g. a 1.5GHz CPU has a shorter
clock cycle than a 1.0GHz CPU).
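A back-of-the-envelope cycle count makes the combination concrete. This is an idealized sketch (no stalls, no dependences, made-up function name `cycles`), not a measurement of any real CPU:

```python
import math

def cycles(num_instructions, depth=5, width=1):
    """Cycles to retire `num_instructions` on an idealized machine with a
    `depth`-stage pipeline issuing `width` instructions per cycle.
    Assumes no stalls and no dependences -- a best case, not a real CPU."""
    # The first group finishes once the pipeline fills (depth cycles);
    # every later cycle retires up to `width` more instructions.
    return depth + math.ceil(num_instructions / width) - 1

n = 1000
serial      = cycles(n, depth=1, width=1)   # no overlap at all
pipelined   = cycles(n, depth=5, width=1)   # overlapped, still scalar
superscalar = cycles(n, depth=5, width=2)   # two pipes, Pentium-style

print(serial, pipelined, superscalar)   # 1000 1004 504
```

Pipelining alone gets close to one instruction per cycle; adding a second issue pipe roughly halves the cycle count, i.e. an effective rate approaching two instructions per clock.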
Patterson and Hennessy's "Computer Organisation and Design" covers
this pretty well in chapter 5, "The Processor: Datapath and Control".
Most discussion around parallelism nowadays is with the aim of
increasing total application performance by adding processing units.
True. But I was responding to a comment as follows:

You can get away with it because of the clever way in which a CPU does
one thing at a time; there is no "true" parallelism.
Parallelism is not a desirable attribute on its own. The parallelism
you refer to within the CPU manifests itself as greater CPU
performance (more instructions executed per unit of time). Using
multiple CPUs together to achieve greater aggregate performance is a
fairly difficult problem nowadays due to the interactions between
threads, and is most definitely programmer-visible (despite what Intel
and Sony will assert in their marketing material).
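One way to see why multi-CPU parallelism is programmer-visible is to replay a single interleaving by hand. The sketch below is a deterministic simulation, not real threading: it treats `counter += 1` as the three machine-level steps load/add/store, and the schedules are hypothetical ones a real scheduler may or may not produce, but both are legal, which is the point.

```python
# Deterministic replay of two interleavings of two "threads" each doing
# counter += 1 as three machine-level steps: load, add, store.

def run_schedule(schedule):
    counter = 0
    regs = {}                       # each thread's private register
    for thread, op in schedule:
        if op == "load":
            regs[thread] = counter
        elif op == "add":
            regs[thread] += 1
        elif op == "store":
            counter = regs[thread]
    return counter

# Both threads load before either stores: one increment is lost.
bad = [("T1", "load"), ("T2", "load"),
       ("T1", "add"),  ("T2", "add"),
       ("T1", "store"), ("T2", "store")]

# Each thread runs its three steps uninterrupted: both increments survive.
good = [("T1", "load"), ("T1", "add"), ("T1", "store"),
        ("T2", "load"), ("T2", "add"), ("T2", "store")]

print(run_schedule(bad), run_schedule(good))   # 1 2
```

A pipelined single CPU never exposes this kind of interleaving to the programmer; multiple threads on multiple CPUs do, which is why the programmer has to get involved (locks, atomics) to get correct aggregate behaviour.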
I have no issue with the idea that, as far as the impact on throughput
is concerned, most users care about parallel computing using multiple
CPUs.
O Plameras
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html