The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.
[EMAIL PROTECTED] (Howard Brazee) writes:
> Depending on one's definition of parallel programming, we have been
> doing to various degrees since before they started off-loading the
> paper-tape reading to the paper-tape reader. Video cards on PCs are
> powerful computers that work in parallel with the program's main
> logic. Our operating systems have allowed us to run payroll and
> accounts payable at the same time, and central databases have expanded
> on this ability.

lots of comments about this in the past couple yrs ... the technology in support of parallel programming has not really changed in at least the past 20 yrs, and as a result actual use has been limited to very specialized implementations.

there has been lots of stuff in multiprogramming and multithreading in the same processor complex (single processor and/or shared-memory multiprocessor). multiprogramming was the system managing lots of independent & different tasks on the same processor complex; multithreading was a program managing its own different tasks. application implementation of multithreading isn't necessarily very pervasive. some number of DBMS implementations have used things like the transaction model to provide "independent" operations that they then multithread. in this sense, DBMS kernels are somewhat more like operating system kernels ... highly specialized ... and not a lot of end users implement their own DBMS kernels.

in a lot of multiprocessor kernel support, a "global" kernel lock was used, which allowed only a single thread to be executing in the kernel at a time. it was a somewhat painful experience for a lot of kernel implementations to make the transition from a single thread (at a time) executing in a multiprocessor kernel to multiple concurrent threads executing in the same parts of the kernel. long ago and far away, this was one of the battles in getting the compare&swap instruction into 370 architecture.
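the global kernel lock paradigm can be sketched in a few lines; this is an illustrative sketch (not IBM code), using C11's atomic_flag as the closest portable stand-in for a test&set-style lock byte:

```c
/* sketch of a "global" kernel lock: one lock guards every kernel
   path, so only one cpu/thread executes kernel code at a time.
   atomic_flag_test_and_set atomically sets the flag and returns its
   previous value -- everybody who finds it already set just spins. */
#include <stdatomic.h>

static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void kernel_enter(void)
{
    /* spin while the previous value was already "set" */
    while (atomic_flag_test_and_set(&kernel_lock))
        ;  /* busy-wait until the holder clears the lock */
}

void kernel_exit(void)
{
    atomic_flag_clear(&kernel_lock);  /* clear; one spinner proceeds */
}
```

the point of the sketch is the coarseness: correctness is easy (nothing in the kernel can race), but every other processor spins doing no useful work, which is exactly why moving to fine-grain locking was so painful.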
test&set had been around since the 60s and was used for 360/65 multiprocessor support with a global kernel spin-lock (set the lock and everybody else spins until the lock is cleared). at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
Charlie had been doing a lot of work on fine-grain locking for the cp67 kernel and invented the compare&swap instruction (mnemonic chosen because "CAS" are Charlie's initials). misc. past posts mentioning SMPs and/or compare&swap
http://www.garlic.com/~lynn/subtopic.html#smp

somewhat implicit in a lot of compare&swap uses is that there can be concurrent threads executing in the same instruction sequences simultaneously. the initial foray into POK attempting to get compare&swap justified was unsuccessful, in large part because the favorite-son operating system felt that test&set was just fine for multiprocessor support (the 360/65 smp global spin-lock paradigm). the challenge was to create a justification for the compare&swap instruction that was applicable to single-processor deployment. thus were born the programming notes that can be found in the principles of operation describing how the "atomic" characteristics of compare&swap can be leveraged in a single-processor environment for multithreaded applications (like DBMS). these aren't necessarily concurrent multithreads ... but multiple threads that might be interrupted ... so atomic operations can be applied both to simultaneous concurrent multithreaded operation and to possibly non-simultaneous (but interruptible) multithreaded operation.

the advance of concurrent, parallel technology into loosely-coupled/cluster deployments is even more limited than the proliferation in tightly-coupled environments. we had done a scalable distributed lock manager in support of our ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
and the "medusa" cluster-in-a-rack activity ...
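the compare&swap "programming notes" pattern described above (load, compute, store-only-if-unchanged, retry on failure) can be sketched with C11 atomics; names here are illustrative, not from the principles of operation:

```c
/* sketch of the compare&swap update loop: fetch the current value,
   compute the new one, then store it only if nobody else changed the
   word in between.  the same loop is safe for truly concurrent cpus
   and for a single cpu where a thread can be interrupted between the
   load and the store -- which was the single-processor justification. */
#include <stdatomic.h>

static _Atomic long counter = 0;

long atomic_add(long delta)
{
    long old, new;
    do {
        old = atomic_load(&counter);   /* fetch current value          */
        new = old + delta;             /* compute the updated value    */
        /* on failure, 'old' is refreshed and we simply retry */
    } while (!atomic_compare_exchange_weak(&counter, &old, new));
    return new;
}
```

unlike the global spin-lock, nothing here blocks: an interrupted or preempted thread never holds a lock that stalls everyone else, it just loses the race and retries.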
old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
and somewhat referenced in these postings about an old meeting
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15

... but again, it tended to be directly used by a very limited amount of specialized code ... there wasn't a huge number of different applications directly implementing the semantics of highly parallel operation (for either tightly-coupled or loosely-coupled configurations).

a couple of recent posts in another thread/fora on the subject:
http://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007l.html#23 John W. Backus, 82, Fortran developer, dies

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html