phil chastney <[EMAIL PROTECTED]> writes:
> I'm happy to say I never worked on Sigma 7's APL -- the people I
> worked alongside would frequently get messages along the lines of
> "System Crash, backups lost, all files restored as at last Monday"
> -- the sites I worked at found it more cost-effective to use
> external time-sharing: expensive but reliable
>
> I don't have any assembler, but somewhere I have a copy of an
> informal write-up of the internals of the Sigma's interpreter -- its
> APL was pretty much in line with APL/360 -- native files only, no
> quad-functions for files or formatting, IIRC -- and no shared
> variables, I believe
>
> the documentation was interesting -- more informal but, at a certain
> level, more informative than IBM's Logic Manual

iverson and falkoff were at the philadelphia science center, and philly
was supporting apl\360. the cambridge science center
http://www.garlic.com/~lynn/subtopic.html#545tech

took apl\360 and ported it to cp67/cms and virtual memory for cms\apl.
apl\360 installations typically provided 16kbyte to 32kbyte
workspaces. part of cms\apl was moving the apl\360 interpreter to a
(large) virtual memory environment. this initially ran into a big
problem with apl\360 storage allocation ... every assignment allocated
a new storage location ... and when memory was exhausted, it performed
garbage collection and compacted storage. this wasn't bad in the
16k-32k real-storage workspace paradigm, where the whole workspace was
always swapped in total. however, for a workspace of possibly a couple
megabytes in paged virtual memory, this was guaranteed to quickly
touch every virtual page ... regardless of the aggregate size of the
variables (if it ran long enuf with enuf assignment operations) ...
which quickly looked like page thrashing in the configurations of the
period.
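
a minimal sketch (python; page and workspace sizes are made up, and
this is not the actual apl\360 code) of that storage discipline,
showing why a single, constantly-reassigned 64-byte variable ends up
touching every page of a large workspace:

    # bump allocation on every assignment, compacting garbage
    # collection only when the workspace is exhausted
    PAGE = 4096
    WORKSPACE_PAGES = 512               # ~2mbyte workspace, hypothetical
    WS_SIZE = PAGE * WORKSPACE_PAGES

    class Workspace:
        def __init__(self):
            self.next_free = 0          # bump pointer
            self.live = {}              # name -> (addr, size)
            self.touched = set()        # pages ever fetched/stored

        def touch(self, addr, size):
            for p in range(addr // PAGE, (addr + size - 1) // PAGE + 1):
                self.touched.add(p)

        def assign(self, name, size):
            # apl\360 style: never reuse the old cell, allocate anew
            if self.next_free + size > WS_SIZE:
                self.compact()
            addr = self.next_free
            self.next_free += size
            self.live[name] = (addr, size)
            self.touch(addr, size)

        def compact(self):
            # sliding compaction: copy every live value to the bottom,
            # reading the old copy and writing the new one
            dst = 0
            for name, (addr, size) in sorted(self.live.items(),
                                             key=lambda kv: kv[1][0]):
                self.touch(addr, size)   # read old copy
                self.touch(dst, size)    # write new copy
                self.live[name] = (dst, size)
                dst += size
            self.next_free = dst

    ws = Workspace()
    for i in range(200_000):
        ws.assign("x", 64)      # tiny variable, reassigned constantly
    print(f"live data: 64 bytes; pages touched: {len(ws.touched)}"
          f" of {WORKSPACE_PAGES}")     # all 512 pages get touched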

one of the things used was the early precursor to vs/repack (before it
was released as a product ... also done by cambridge), which monitored
all data fetches and stores and all instruction fetches (and was also
used for hotspot execution analysis). i've commented before that we
had these floor-to-ceiling "plot" printouts that ran down the office
corridor; time was along the horizontal (running down the corridor),
and storage address was vertical (giving storage location fetch/store
over time). apl had this very sawtooth appearance ... a sloped black
band that quickly swept from the bottom of storage to the top, and
then a solid vertical band where garbage collection was performed. the
whole thing had to be rewritten for the virtual memory environment.
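
for flavor, a toy rendering (same made-up assumptions: bump allocation
plus compaction when the workspace fills) of the kind of
address-versus-time picture those corridor plots showed:

    # time runs left to right, storage address bottom to top;
    # each '*' marks where the allocator stored at that sample
    WS_SIZE, CELL, ROWS, COLS = 4096, 8, 16, 72

    next_free, samples = 0, []
    for step in range(COLS * 40):        # one assignment per step
        if next_free + CELL > WS_SIZE:   # garbage collect and compact
            next_free = CELL             # one tiny live variable survives
        addr = next_free
        next_free += CELL
        if step % 40 == 0:
            samples.append(addr)

    rows = [[' '] * COLS for _ in range(ROWS)]
    for t, addr in enumerate(samples):
        rows[ROWS - 1 - (addr * ROWS // WS_SIZE)][t] = '*'
    print('\n'.join(''.join(r) for r in rows))
    # the sloped bands sweeping bottom-to-top are the allocator marching
    # through storage; each reset to the bottom is a compaction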

the other thing that was done for cms\apl was to allow it to directly
invoke system calls. this caused quite a bit of heartburn in philly,
since it violated apl purity. however, it (along with large virtual
address spaces) allowed some real applications to be done. eventually
we had the business people in armonk loading all the (extremely
sensitive) customer sales & install information and using the
cambridge apl facilities to perform business analysis and planning
(aka apl was being used for a lot of the stuff that spreadsheets are
commonly used for today).

this also created something of a significant security issue, since the
data was the most sensitive/valuable the company had ... and the
cambridge system also had a lot of student users from various
universities in the area (mit, harvard, bu, etc).

this also opened the way for the HONE APL applications that eventually
were the basis for worldwide sales and marketing support (the US hone
vm370 datacenter, consolidated in northern cal. in the late 70s, had
nearly 40k user definitions, and there were clones of the system all
over the world) ... misc. past HONE and/or APL posts
http://www.garlic.com/~lynn/subtopic.html#hone

the system-call abomination and its violation of apl purity were
eventually resolved with the introduction of the shared variable
paradigm ... and apl\sv.
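
a loose model of the shared-variable idea (python; the names and the
command protocol here are made up for illustration, not ibm's
implementation): instead of the apl program calling the system
directly, both partners assign and reference a shared variable, and an
"auxiliary processor" on the other end turns those values into system
services ... here, appending to and reading back a toy file:

    import queue, threading

    class SharedVariable:
        # values passed back and forth between the two partners
        def __init__(self):
            self.to_ap = queue.Queue()    # user -> auxiliary processor
            self.to_user = queue.Queue()  # auxiliary processor -> user

        # the user's side of the variable: assignment and reference
        def assign(self, value): self.to_ap.put(value)
        def reference(self): return self.to_user.get()

    def toy_file_ap(sv):
        # auxiliary processor: interprets assigned values as commands
        records = []
        while True:
            value = sv.to_ap.get()
            if value == ('read',):
                sv.to_user.put(list(records))
            elif value == ('quit',):
                return
            else:
                records.append(value)     # anything else is appended

    sv = SharedVariable()
    threading.Thread(target=toy_file_ap, args=(sv,), daemon=True).start()
    sv.assign('first record')
    sv.assign('second record')
    sv.assign(('read',))
    print(sv.reference())     # ['first record', 'second record']
    sv.assign(('quit',))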

before that, the palo alto science center had taken cms\apl, done a
number of enhancements in the vm370 time-frame, and produced apl\cms.
they also had done the 370/145 apl microcode performance enhancement
(lots of apl\cms applications on a 370/145 with the microcode assist
ran at the thruput of a 370/168 ... aka nearly a factor of 10
improvement).

and a repeat from a previous post:

here is falkoff's "The IBM family of APL systems"
http://www.research.ibm.com/journal/sj/304/ibmsj3004C.pdf

for some drift ... the vs/repack product, in addition to doing storage
fetch/store capture and plots, also provided semi-automated
application re-organization ... optimizing for a virtual memory, paged
environment.
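
a greedy sketch of the re-packing idea (python; my reconstruction for
illustration, not the actual vs/repack algorithm): given a trace of
which routines execute near each other in time, place routines that
co-occur onto the same virtual page so the working set spans fewer
pages:

    from collections import Counter
    from itertools import combinations

    PAGE = 4096

    def repack(trace, sizes, window=8):
        # affinity[a, b] = how often a and b appear in the same window
        affinity = Counter()
        for i in range(len(trace) - window):
            for a, b in combinations(sorted(set(trace[i:i + window])), 2):
                affinity[a, b] += 1
        # greedily fill pages: seed each page with the hottest unplaced
        # routine, pull in its highest-affinity neighbours that still fit
        heat = Counter(trace)
        unplaced, pages = set(sizes), []
        while unplaced:
            seed = max(unplaced, key=lambda r: heat[r])
            page, room = [seed], PAGE - sizes[seed]
            unplaced.discard(seed)
            for r in sorted(unplaced,
                            key=lambda r: -affinity[tuple(sorted((seed, r)))]):
                if sizes[r] <= room:
                    page.append(r); room -= sizes[r]; unplaced.discard(r)
            pages.append(page)
        return pages

    # toy trace: routines a/b call each other constantly, c/d likewise
    sizes = {'a': 1500, 'b': 2000, 'c': 1800, 'd': 2100}
    trace = ['a', 'b'] * 50 + ['c', 'd'] * 50
    print(repack(trace, sizes))   # groups a with b, c with d

misc. past vs/repack posts: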
http://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
http://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
http://www.garlic.com/~lynn/2002f.html#50 Blade architectures
http://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
http://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
http://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
http://www.garlic.com/~lynn/2004.html#14 Holee shit!  30 years ago!
http://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
http://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
http://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
http://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
http://www.garlic.com/~lynn/2005.html#4 Athlon cache question
http://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005d.html#48 Secure design
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
http://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
http://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
