On 11/14/07, John Stanton <[EMAIL PROTECTED]> wrote:

> Threads simulated in software are a kludge to better utilize current
> processor and operating system architectures.  In time, machines where
> the parallelism is handled in hardware will be more widely available and
> the threading will be transparent and highly efficient.

If the software task is expressed in a serial manner, as most
programming methods express it today, how exactly is the hardware
supposed to magically parallelize it?  Expressing tasks so that they
can be executed in parallel is very much a software problem.
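To make that concrete, here is a tiny sketch (the function names are
invented for illustration): the first loop has a loop-carried
dependency, so as written no amount of hardware can run its iterations
in parallel, while the second loop's iterations are independent and
could be spread across cores, but only if the code says so.

#include <stddef.h>

/* Inherently serial: each iteration uses the result of the previous
 * one, so the hardware cannot overlap the iterations as written. */
void prefix_decay(const double *in, double *out, size_t n) {
    double running = 0.0;
    for (size_t i = 0; i < n; i++) {
        running = running * 0.5 + in[i];  /* depends on iteration i-1 */
        out[i] = running;
    }
}

/* Independent iterations: these could be split across cores, but only
 * if the program is expressed that way. */
void square_all(const double *in, double *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * in[i];
}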

(Hardware already does a bit of this on its own, via out-of-order CPU
instruction execution.  Some of it is transparent, and some of it
isn't: the various "memory models" implemented by CPUs drive a certain
class of programmers nuts, since the problems are even harder than
threading with shared state.)
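As a rough illustration of the kind of thing those memory models
govern (just a sketch, using C11 atomics, with invented names): a
writer publishes some data and then sets a flag, and without
release/acquire ordering a reader on another core may see the flag
before it sees the data.

#include <stdatomic.h>

int payload;             /* data written by one thread            */
atomic_int ready = 0;    /* flag that says the payload is visible */

/* Writer: store the payload, then set the flag with release semantics
 * so the payload store cannot be reordered after the flag store. */
void publish(void) {
    payload = 42;
    atomic_store_explicit(&ready, 1, memory_order_release);
}

/* Reader: acquire on the flag guarantees that once it sees ready == 1
 * it also sees payload == 42.  With plain non-atomic variables the
 * CPU or compiler is free to reorder the two accesses, which is
 * exactly the sort of surprise a weak memory model allows. */
int consume(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;  /* spin until the writer has published */
    return payload;
}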


On 11/14/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > If your machine has a single disk it fundamentally does not have parallel
> > I/O.  If you have a machine with multiple disk spindles and multiple
> > channels then you can have parallel access.  Multiple SQLite databases
> > residing on the same disk are accessed sequentially because the access
> > depends upon the disk head positioning.

> It can be added that while a disk can only perform one operation at a
> time, modern disks have NCQ capabilities that let them reduce seek times,
> for example by using an elevator algorithm.
> The OS also performs some optimizations of its own when queuing up
> several I/O requests to the same device.

Not to mention caching, by both the disk and OS.
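For what it's worth, here is a minimal sketch of the elevator idea
mentioned above (the names and interface are made up, and real drive
firmware and OS schedulers are far more elaborate): pending requests
are served in order of track position along the current sweep
direction rather than in arrival order, which cuts down on head
movement.

#include <stdlib.h>

/* One outstanding request, identified only by its track (or LBA). */
struct request { long track; };

static int by_track(const void *a, const void *b) {
    long ta = ((const struct request *)a)->track;
    long tb = ((const struct request *)b)->track;
    return (ta > tb) - (ta < tb);
}

/* Serve requests at or above the current head position in ascending
 * order, then the remaining ones on the way back down, instead of
 * strictly first-come-first-served. */
void elevator_sweep(struct request *reqs, size_t n, long head,
                    void (*serve)(const struct request *)) {
    qsort(reqs, n, sizeof reqs[0], by_track);

    size_t start = 0;
    while (start < n && reqs[start].track < head)
        start++;                        /* first request on the upward sweep */

    for (size_t i = start; i < n; i++)  /* sweep up */
        serve(&reqs[i]);
    for (size_t i = start; i-- > 0; )   /* then sweep back down */
        serve(&reqs[i]);
}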
