If you look at the original underlying NT I/O architecture that Cutler 
implemented, it is a thing of beauty: it is based on async patterns, not 
threads.

It was the Win32 wrappers over the NT subsystem, intended to make things 
"easier" for developers, that forced synchronous blocking code on top of the 
async Zw/Nt layers. That made multi-threading the only practical way to deal 
with "blocking" I/O.

Today even junior-ish developers can deal with async code in node.js without 
batting an eyelid - the language makes async interaction simple enough, even 
in a single-threaded environment. It wasn't the underlying technology that 
was wrong, it was the simplifying abstraction on top of it.

If NT had initially exposed only the Nt APIs and not the Win32 layers, we 
would have had languages that simplified async a long time ago - and 
multi-threading would be the domain of the few applications that actually 
need compute, not just non-blocking I/O. That wasn't down to Cutler's 
architecture, though - it was market-driven decisions to maintain API 
compatibility with Win95, which was in turn driven by compatibility with the 
16-bit APIs, which long predated Cutler.

- Deon

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of James K. Lowden
Sent: Thursday, February 16, 2017 6:26 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Thread safety of serialized mode

On Thu, 16 Feb 2017 21:49 +0000
Tim Streater <t...@clothears.org.uk> wrote:

> > What's inherently wrong with threads in principle is that there is 
> > no logic that describes them, and consequently no compiler to 
> > control that logic.
> 
> [snip remainder of long whinge about threads]
> 
> Sounds, then, like I'd better eliminate threads from my app. In which 
> case when the user initiates some action that may take some minutes to 
> complete, he can just lump it when the GUI becomes unresponsive.

[snip chest thumping]

You didn't refute my assertion, and facts refute yours.  

There has been a single-threaded GUI in use for some 30 years, dating back 
to your VMS days.  I'm sure you've heard of it: the X Window System?  

If your particular GUI system is based on threads, like, say, Microsoft 
Windows, then, yes, you're pretty much cornered into using threads.  But that 
doesn't change the fact that you have no compiler support to verify the 
correctness of memory access over the time domain.  It doesn't change the fact 
that the OS has subverted the guarantees your language would otherwise provide, 
such as the atomicity of ++i noted elsewhere in this thread.  
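
That ++i point is easy to demonstrate, by the way. A minimal sketch with 
POSIX threads (counts are arbitrary; the final total usually falls short of 
2,000,000 because ++counter compiles to load/increment/store):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            ++counter;              /* not atomic: read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);   /* rarely 2000000 */
        return 0;
    }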

W. R. Stevens describes four models for managing concurrency:

1.  Multiplexing: select(2)
2.  Multiprocessing
3.  Asynchronous callbacks
4.  Signal-driven

None of those subvert the semantics of the programming language.  In each case, 
at any one moment there is only one thread of control over any given section of 
logic.  
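
Model 1 in miniature - a toy select(2) loop that echoes stdin, one thread, 
one well-defined blocking point (real code would watch many descriptors):

    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(STDIN_FILENO, &rfds);

            /* Block here, and only here, until a descriptor is ready. */
            if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, NULL) < 0)
                return 1;

            if (FD_ISSET(STDIN_FILENO, &rfds)) {
                char buf[256];
                ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
                if (n <= 0) break;  /* EOF or error */
                (void)write(STDOUT_FILENO, buf, (size_t)n);
            }
        }
        return 0;
    }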

Hoare had already published "Communicating Sequential Processes" 
(http://www.usingcsp.com/cspbook.pdf) when Microsoft hired David Cutler to 
design Windows NT.  It's too bad they adopted threads as their 
concurrency-management medium.  If they'd chosen CSP instead, maybe they 
wouldn't have set computing back two decades.  

--jkl


_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users