The simple answer is that current mass-market machines and software
strongly resist parallel processing. Newer architectures allow for
massively parallel execution and support software that can take
advantage of it fairly transparently. Each needs to be used in a way
that maximizes its strengths.
The machines we are accustomed to currently work best when they are
used like a cat running across a busy road: take off by itself, look
straight ahead and run flat out. A parallel operation is more like a
herd of wildebeest swimming the Zambezi River. The slow-swimming
wildebeest method gets the herd across faster than you could get your
fast-running cats across the road. However, the cats cannot handle the
wildebeest method, and the wildebeest would be roadkill if they tried
the cat approach.
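The herd-versus-cat argument is really about aggregate throughput. A toy calculation (all numbers made up for illustration) shows how many slow concurrent workers can beat one fast serial worker:

```python
# Toy throughput comparison with made-up numbers: one fast serial
# worker (the "cat") versus many slow concurrent workers (the "herd").

fast_serial_rate = 10.0      # items/second for the single fast worker
slow_parallel_rate = 1.0     # items/second for each slow worker
herd_size = 100              # slow workers running concurrently

serial_throughput = fast_serial_rate                  # 10 items/s
parallel_throughput = slow_parallel_rate * herd_size  # 100 items/s

print(parallel_throughput / serial_throughput)  # the herd wins 10x here
```

The point is only that throughput scales with worker count when the work is independent; the numbers are arbitrary.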
Trevor Talbot wrote:
On 11/14/07, John Stanton <[EMAIL PROTECTED]> wrote:
Threads simulated in software are a kludge to better utilize current
processor and operating system architectures. In time, machines where
parallelism is handled in hardware will be more widely available, and
threading will be transparent and highly efficient.
If the software task is expressed in a serial manner, as most current
programming methods express it, how exactly is the hardware supposed
to magically parallelize it? Expressing tasks so they are capable of
being executed in parallel is very much a software problem.
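The distinction can be sketched in a few lines: a loop whose iterations depend on each other cannot be parallelized by any hardware without changing the program, while a loop of independent iterations parallelizes trivially once the programmer expresses it that way. The function names here are illustrative, not from the thread:

```python
# Whether a loop parallelizes is a property of how the task is expressed.
from concurrent.futures import ThreadPoolExecutor

def serial_chain(values):
    # Each step consumes the previous result, so the iterations form a
    # dependency chain: no hardware can run them concurrently.
    acc = 0
    for v in values:
        acc = acc * 2 + v
    return acc

def parallel_map(values):
    # Independent iterations: the parallelism is expressed in software,
    # so a worker pool (or parallel hardware) can share the work.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda v: v * v, values))

print(serial_chain([1, 2, 3]))   # ((0*2+1)*2+2)*2+3 = 11
print(parallel_map([1, 2, 3]))   # [1, 4, 9]
```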
(That actually happens to some extent already, with out-of-order CPU
instruction execution. Some of it is transparent, and some of it
isn't: the various "memory models" implemented by CPUs drive a certain
class of programmers nuts, since the problems are even harder than
threading with shared state.)
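The classic hazard behind both weak memory models and shared-state threading is the lost update. Rather than racing real threads (which is nondeterministic), this sketch replays one unlucky interleaving of a non-atomic increment by hand:

```python
# Deterministic illustration of a "lost update": a non-atomic increment
# is really three steps (read, compute, write), and two threads can
# interleave those steps so that one update vanishes.

counter = 0

def unsafe_increment_steps(start):
    read = start          # step 1: load the shared value
    computed = read + 1   # step 2: compute in a private register
    return computed       # step 3: the store happens in the caller

# Both "threads" read counter == 0 before either writes back:
t1 = unsafe_increment_steps(counter)
t2 = unsafe_increment_steps(counter)
counter = t1   # thread 1 stores 1
counter = t2   # thread 2 stores 1, overwriting thread 1's update

print(counter)  # 1, not 2: one increment was lost
```

On real hardware, reordering permitted by the memory model widens the set of interleavings even further, which is why this class of bug is so hard to reason about.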
On 11/14/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
If your machine has a single disk, it fundamentally does not have parallel
I/O. If you have a machine with multiple disk spindles and multiple
channels, then you can have parallel access. Multiple SQLite databases
residing on the same disk are accessed sequentially because the access
depends upon the disk head positioning.
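A back-of-envelope calculation (assumed, not measured, timings) shows why: every switch between two databases on one spindle costs a head seek, so interleaved access is dominated by seek time:

```python
# Illustrative numbers only: why interleaving two databases on one
# spindle is so much slower than reading each sequentially.

seek_ms = 9.0        # assumed average seek + rotational latency
transfer_ms = 0.1    # assumed time to read one page once positioned
pages_per_db = 100

# One sequential pass per database: one seek each, then streaming reads.
sequential = 2 * (seek_ms + pages_per_db * transfer_ms)

# Alternating page reads between the databases: a seek for every page.
interleaved = 2 * pages_per_db * (seek_ms + transfer_ms)

print(sequential)    # 38.0 ms
print(interleaved)   # 1820.0 ms: "parallel" access is ~48x slower
```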
It can be added that while a disk can only perform one operation at a
time, modern disks have NCQ capabilities that let them reduce seek
times, for example by using an elevator algorithm.
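A minimal sketch of the elevator (SCAN-style) idea, the kind of reordering NCQ firmware performs; the request numbers and function names are invented for illustration:

```python
# Elevator (SCAN-style) reordering: serve cylinders at or above the
# head while sweeping up, then the rest sweeping back down.

def elevator_order(head, requests):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def head_travel(head, order):
    # Total distance the head moves to serve requests in this order.
    travel = 0
    for r in order:
        travel += abs(r - head)
        head = r
    return travel

pending = [98, 183, 37, 122, 14, 124, 65, 67]
fifo = head_travel(53, pending)                      # arrival order: 640
scan = head_travel(53, elevator_order(53, pending))  # elevator order: 299

print(fifo, scan)  # the elevator order travels far less
```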
The OS is also performing some optimizations when queuing up several I/O
requests to the same device.
Not to mention caching, by both the disk and OS.
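The effect of that caching can be shown with a toy page cache; the structure and workload are illustrative only, not how any particular OS or drive implements it:

```python
# Toy page cache: repeated logical reads of hot pages are absorbed in
# memory, so only the first touch of each page reaches the disk.

physical_reads = 0
cache = {}

def read_page(page):
    global physical_reads
    if page not in cache:
        physical_reads += 1            # miss: must touch the disk
        cache[page] = f"data-{page}"   # pretend disk contents
    return cache[page]                 # hit: served from memory

# A workload that revisits the same hot pages:
for page in [1, 2, 1, 3, 1, 2, 1]:
    read_page(page)

print(physical_reads)  # 3 physical reads for 7 logical reads
```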
-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------