Re: What happened to resumable instructions?

2008-02-07 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] writes:
 I just noticed this thread today.  I see there is a lot of speculation
 and incorrect statements, so let me try to clarify.

 First, from a processor implementation point of view, there is no
 problem with the old-style MVCL/CLCL interruptible instructions.  At
 appropriate points based on pending interrupts and potential page
 crossings leading to an access exception, the millicode exits from
 their execution and sets the GRs and PSW appropriately to continue
 later.  There is no risk of checkstops and it is perfectly well
 architected to handle a page fault in the middle, as some have
 speculated.

 The reason for the new CC3-style interruptible instructions is very
 simple.  It was requested by software developers (both internal and
 external to IBM).  It allows more flexibility in handling other system
 activity that is not interrupt driven.  So for example, software can
 go off and perform some housekeeping while a long running MVCLE is
 executing.  Note that POPS requires the processor to exit with a CC3
 every (approximately) 4KB processed.  So for a multi-page move, the
 overhead of starting and stopping can actually slow down the
 throughput of the move very slightly.

 I would expect that all future interruptible ops be of the CC3-style.
 That said, there will be at least one new interruptible version of an
 existing instruction announced soon, however, it is an instruction
 never used by the vast majority of software developers.

re:
http://www.garlic.com/~lynn/2008c.html#67 What happened to resumable 
instructions?

long storage-to-storage operations with lots of accesses back to real
storage ... cost an ever increasing number of processor cycles (as the
mismatch between processor speed and memory speed increases) ... recent
reference
http://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job

one could claim that nearly the same effective results (cc3-style) could
be achieved for old-style interruptible instructions under program
control ... but requiring a few more registers for loop control ... i.e.
the actual length is kept in other registers and the lengths used for
the mvcl/clcl instructions are limited (to 4096).
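the loop-control idea can be sketched as follows (in python rather than
370 assembler; the names, chunk size, and yield point are illustrative,
not the actual instruction or millicode interface):

```python
def chunked_move(dst, src, total_len, chunk=4096):
    """move total_len bytes from src to dst in chunk-size pieces;
    'moved' plays the role of the extra loop-control register holding
    the actual length, while each mvcl-style step is limited to 4096."""
    moved = 0
    while moved < total_len:
        n = min(chunk, total_len - moved)
        dst[moved:moved + n] = src[moved:moved + n]
        moved += n
        yield moved      # cc3-style exit point: caller may do housekeeping

src = bytearray(b"x" * 10000)
dst = bytearray(10000)
exits = list(chunked_move(dst, src, len(src)))   # resumes after each exit
```

exits records one exit per (up to) 4096 bytes moved ... analogous to the
POPS requirement of a CC3 exit roughly every 4KB processed.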

there are other thruput issues associated with long storage-to-storage
operations; even back to working on original mainframe tcp/ip product
implementation.

at the time, some of the competitive tcp/ip implementations were looking
at a 5k instruction pathlength and five buffer copies ... compared with
something like a 150k instruction pathlength and 16 buffer copies for
lu6.2. at that time, assuming 8kbyte NFS-size buffers ... the processor
overhead for the 16 (LU6.2) 8kbyte storage-to-storage buffer copy
operations exceeded the processor time for the rest of the pathlength.
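the copy overhead can be put in rough numbers (a back-of-envelope sketch;
only the buffer counts and the 8kbyte size come from the comparison above):

```python
buf = 8 * 1024                 # 8kbyte NFS-size buffer
lu62_copied = 16 * buf         # 16 buffer copies (lu6.2)
tcpip_copied = 5 * buf         # 5 buffer copies (competitive tcp/ip)

# bytes shuffled storage-to-storage per 8kbyte of payload,
# and the relative copy burden of the two stacks
ratio = lu62_copied / tcpip_copied
```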

the other factor was that some processors provided cache-bypass,
storage-to-storage instructions. a large number of significant sized
storage-to-storage operations not only has ever increasing
significant overhead in terms of processor cycles ... but can have an
extremely detrimental effect on cache occupancy (the actual data in most
of the buffers has little probability of ever being needed in the cache,
but replaces data that would be needed).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: CPU time differences for the same job

2008-02-07 Thread Anne Lynn Wheeler


Neal Eckhardt [EMAIL PROTECTED] writes:
 We also had the batch window running so long that we were missing
 deadlines, and the CIO asked me to review the program stream to see
 what could be done to make it run faster (yea, good luck with that,
 they were all database programs and that's a different group). I told
 him that our new CPU (of essentially the same speed) that was due to
 arrive in two weeks had two processors rather than the four processors
 our current machine had, and the problem would probably be resolved
 with the installation of our new CPU.

re:
http://www.garlic.com/~lynn/2008c.html#88 CPU time differences for the same job

the issue raised in the post about disk thruput ... was that over a
period of years, processors got around 50 times faster while disks only
got 3-4 (maybe 5) times faster. Some claims are that the relative
abundance of processor power led to a lot of (processor) implementation
inefficiencies ... which could be relatively straightforwardly improved
if anybody were to pay any attention.

as circuit size decreased ... and processor thruput increased ... lots
of signal latencies started to become a larger and larger factor. the
latencies (measured in number of processor cycles) that played a part in
processor/disk thruput ... started to also dominate processor & real
storage ... and in much the same way that real storage started to be
used as a cache to compensate for disk thruput ... processor caches
became more & more important in compensating for memory latency. in fact,
using processor cycles as a measure of memory access time became one
way of easily recognizing the problem (memories got faster, but
processors got much, much faster) ... especially when references were to
cache-miss & memory latency measured in possibly thousands of
processor cycles.
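the effect is easy to see with a simple average-cycles-per-instruction
model (the specific miss rate and latency numbers below are illustrative
assumptions, not measurements from the post):

```python
def effective_cpi(base_cpi, miss_rate, miss_penalty):
    """average cycles per instruction once cache-miss stalls are
    charged in: base cost plus (fraction of instructions that miss)
    times (memory latency measured in processor cycles)."""
    return base_cpi + miss_rate * miss_penalty

# e.g. a 1-cycle machine where 1% of instructions miss all the way to
# memory at a 500-cycle latency runs several times slower than its
# nominal mip rate suggests
slowdown = effective_cpi(1.0, 0.01, 500) / 1.0
```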

for a little drift, recent posts mentioning applications hitting
(overnight) batch window limits
http://www.garlic.com/~lynn/2008b.html#74 Too much change opens up financial 
fault lines 
http://www.garlic.com/~lynn/2008c.html#24 Job add for z/OS systems programmer 
trainee

and for something totally different ... recent discussion of electronic
commerce implementations running into problems hitting overnight window
limit
http://www.garlic.com/~lynn/2008c.html#85 Human error tops the list of security 
threats

multitasking and multithreading have been used in the past attempting to
mask disk access latencies ... keeping the processor busy while something
was waiting for disk access.

earlier in this decade ... there were chip hyperthreading solutions
also looking at keeping processor functional units busy
... compensating for/masking cache-miss & memory access latency. the
number of processor functional units wasn't actually increased ... but
there was hardware emulation of two processors ... basically two
instruction streams, two sets of registers, etc ... with actual execution
being done by common functional units.

possibly the original in this genre was a project to do a dual i-stream
implementation for 370/195 (i.e. emulation of two-processor smp)
... which never actually shipped as product. the issue was that 195 had
a 64 instruction pipeline that could execute instructions concurrently
with common set of functional units. the pipeline could go a long way
towards masking the difference in processor thruput and memory latency
(w/o actually having a cache). The problem was that the amount of
processor logic/memory supporting this function was rather limited.
These days, chip implementations may have several hundred positions for
dealing with branch prediction and speculative execution (and backing
out instructions not actually branched to). In the 370/195, except in
very special cases, branches would drain the pipeline. Normal code,
with typical branch usage, resulted in the 195 running at half peak
thruput (it took careful programming to keep the 195 operating at peak thruput).
The idea behind the dual i-stream was to have two sets of independent
instruction streams ... both operating at half peak thruput ... but
combined, capable of keeping the 195 pipeline fully stocked with
instructions.

misc. past posts mentioning dual i-stream as (one of the) mechanisms for
compensating for/masking increasing memory latencies (as measured in
processor cycles)
http://www.garlic.com/~lynn/94.html#38 IBM 370/195
http://www.garlic.com/~lynn/99.html#73 The Chronology
http://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
http://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT Hyperthreading
http://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel 
information.
http://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
http://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past

Re: CPU time differences for the same job

2008-02-06 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Tom Schmidt) writes:
 I've been running VM more off than on since PLC 5 and I'm certain that
 the behavior that I referenced WAS in VM... at some point.  But if you
  Lynn Wheeler say it isn't there now, I'll believe you (unless/until
 I can prove you wrong, of course).
  
 But I know back in the VM/HPO or (maybe) early VM/XA days it was true
 that VM put itself into a tiny loop while it waited for work.  The
 loop was in a unique-to-VM PSW key so that the hardware monitor (the
 speedometer) could tell the difference between work and wait.

there were a number of specific environment experiments done in that
time frame ... for one reason or another.

one of the first was for acp/tpf on the 3081. acp/tpf didn't have
multiprocessor support ... and there wasn't going to be a
non-multiprocessor machine.

normally, to simulate a privileged instruction (not handled by the
microcode) ... interrupt into the vm kernel, do the simulation, and
return to the virtual machine. this overhead will tend to be constant
from run to run ... directly part of doing work for the virtual machine.
over the years, attempts were made to get this as small as possible
and/or have it done directly in the hardware of the machine.

the other overhead is the cache/paging scenario ... fault for page not
in memory and there is overhead to bring the page into memory. this is
analogous to cache miss ... and the program appears to execute slower
because of the latency to process the cache miss. this can be variable
based on other activity going on in the real machine (analogous reasons
for both cache misses and page faults).

in the acp/tpf scenario ... if essentially just about the only workload
was acp/tpf ... the 2nd 3081 processor would be idle. so there was a
hack developed for things like SIOF emulation ... interrupt into the
kernel, create an asynchronous task for SIOF emulation, SIGP the other
processor and return to the acp/tpf virtual machine. Creation of the
asynchronous task, signaling the other processor, taking the interrupts,
plus misc. multiprocessing locking/unlocking drove up total avg overhead
by ten to fifteen percent. However, the SIOF and ccw translation
offloaded to run asynchronously on the idle 3081 processor resulted in a
net thruput benefit for the single acp/tpf scenario.

The problem was that the implementation drove up the total avg
overhead by ten to fifteen percent for every customer running VM on
multiprocessor ... even those where the other processors weren't idle.

For pure topic drift ... there is something analogous going on in the
current environment with multi-core processors being introduced into the
desktop/laptop (personal) computing environment.

Eventually, a 3081 with the 2nd processor removed was announced as the
3083 (for acp/tpf customers). Since the 3081 still had the cross-cache
chatter 10 percent cycle slowdown scenario i've described for 370
multiprocessors ... they were able to run the single 3083 processor
nearly 15 percent faster. even later still, acp/tpf eventually supported
multiprocessors.

Active wait was another such experiment ... where a specific hardware
configuration and workload gained a couple percent if the system
effectively polled for something to do.

from long ago and far away

To: wheeler
DATE: 04/19/85 20:58:47
 
On 4/10/85 xx presented his latest results to management and others
and I thought you might be interested to hear how we stack up against
HPO.  These are runs of VM/XA SF1 (which is Mig. Aid releases 3 and 4
rolled up into one package now), with about 2K LOC of enhancements to
boost the performance.  The enhancements include processor-local true
runlists and active wait, with a master-only runlist also.  They also
include a significant rework of the drum paging code and rework of the
SSKE code (for non-resident pages only?).  And other things which I just
forget now.  All these things collectively saved a whole lot of
execution time.
 
As a result, SF1 now can handle 80% of the number of CMS users
that HPO can handle, whereas earlier it was only about 60% as many as
HPO.

... snip ... 

now, the HPO base they are referring to still has the 10-15 percent
multiprocessor penalty that had been introduced for the acp/tpf
environment. There was also a list of a dozen or so other carefully
chosen workload and configuration items to try and weight the comparison
in VM/XA SF1's favor (CMS workload truly trivial, trivial paging
activity, homogeneous well-behaved CMS workload ... but lots of them).
I don't remember the exact VM/XA SF1 processor-cycle trade-off for
active wait vis-a-vis actually being in wait state (and VM/XA was a
totally different implementation from VM/HPO).

The active wait was along the same lines as the delayed queue drop
fix from the same era. There was a bug in identifying idle activity
and dropping idle tasks 

Re: CPU time differences for the same job

2008-02-06 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Phil Smith III) writes:
 This entire discussion is interesting to a VMer.  VM has always done
 VTIME and TTIME: VTIME is the CPU time used by the guest, TTIME is the
 VTIME plus CP overhead.  VTIME should be repeatable independent of
 system load; TTIME is what varies.  VMers talk about T/V ratios,
 i.e., How well are we doing in terms of overhead? (Goal=1.00)

TTIME overhead is frequently viewed as CPU that would not be there if
run on the real machine (w/o vm). VTIME should be approx. the cpu time
running on the (bare) real machine (w/o vm) (although it has all the
vagaries and caveats with respect to cache misses, interrupts, etc).
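a trivial sketch of the T/V arithmetic (the sample times are made up
for illustration):

```python
def tv_ratio(ttime, vtime):
    """T/V ratio: total cpu time (virtual + CP overhead) over virtual
    cpu time; 1.00 would mean zero hypervisor overhead."""
    return ttime / vtime

vtime = 10.0                     # cpu seconds the guest itself used
ttime = 11.5                     # vtime plus CP overhead
ratio = tv_ratio(ttime, vtime)   # 15% overhead over bare-machine time
```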

the jokes have been (for the other mainframe systems) that if customers
could run their applications with and w/o the underlying (mainframe)
operating systems ... there would be much less bloat in those
systems. in the vm case, there was significantly more pressure to have a
highly optimized implementation (because of the with-and-without issue).

as a result ... there were various scenarios over the years ... where if
customers could configure their other mainframe operating systems to
rely on the underlying VM for various functions ... things actually ran
faster than on the bare machine.

one of those was VS1 handshaking ... where it was possible to turn off
VS1 virtual paging ... and turn over the responsibility to VM ... it
wasn't just that it made VS1 run faster (when running in a virtual
machine) ... VS1 could actually run faster than when running w/o VM at
all.

other recent posts related to this subject:
http://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#80 Random thoughts
http://www.garlic.com/~lynn/2008c.html#81 Random thoughts
http://www.garlic.com/~lynn/2008c.html#82 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#83 CPU time differences for the same job



Re: CPU time differences for the same job

2008-02-06 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Ted MacNEIL) writes:
 Most of the time, I had already told them that there was an I/O
 bottleneck, tape-drive contention, scheduling, etc. issue.  But, they
 told me to upgrade the processor, anyways.  Yes, we usually needed the
 processor, but the other issues would usually give us a better
 improvement.

starting in the mid-to-late 70s ... i was observing that systems were
becoming more i/o bound ... and increasing real storage was more and
more being used to compensate for i/o limitations.

part of the issue was the work that i was doing on dynamic adaptive
resource scheduling ... which included some stuff to dynamically adapt to
resource bottlenecks. real storage, paging i/o and processor time were
becoming less & less frequently the primary bottleneck.

along the way i had made the comment that relative system disk thruput
technology had declined by an order of magnitude over a number of years.
at some point this drew the attention of some execs in the disk division
... who asked their performance group to refute the statement.

after a couple weeks, the group came back and said that i had somewhat
understated the situation. part of the analysis was eventually reworked
as disk thruput recommendations for a share presentation.

old post with acknowledgement from b874 at share 63:
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

old post with summary from presentation b874 at share 63:
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please

other post with reference to b874
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

in the 80s there was increasing shift to using real storage to
compensate for increasing disk i/o thruput limitations. an example is
upswing of relational databases.

the original relational/sql effort was done on vm370 platform
at research
http://www.garlic.com/~lynn/subtopic.html#systemr

there were some discussions that went on with the ims people in stl (as
well as consulting with them) vis-a-vis the difference between 60s dbms
and relational dbms. the stl viewpoint was that the relational implicit
index doubled the physical disk space requirement and significantly
increased the disk i/os for accesses (to process the index structure)
... compared to the direct record pointers (part of the data
infrastructure) then in common use. the relational position was that the
implicit indexes abstracted away directly exposed record pointers
... significantly reducing the manual maintenance in the care and
feeding of the dbms.

the big upswing in the amount of system real storage during the 80s
allowed the relational index structure to be cached ... significantly
reducing the i/o penalty. disk storage also became significantly cheaper,
starting to tip the balance between hardware/thruput costs (for
relational) vis-a-vis skills & manual costs (for 60s dbms technologies).

for other topic drift ... posts about getting to play disk engineer
in disk engineering (bldg 14) and product test lab (bldg 15)
http://www.garlic.com/~lynn/subtopic.html#disk

past posts in this thread:
http://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#80 Random thoughts
http://www.garlic.com/~lynn/2008c.html#81 Random thoughts
http://www.garlic.com/~lynn/2008c.html#82 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#83 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#84 CPU time differences for the same job



Re: CPU time differences for the same job

2008-02-05 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Wertheim, Marty) writes:
 The discussion on high speed buffers and internal cache misses is
 right on target.  I've got a set of benchmarks that are plain vanilla
 COBOL programs - no DB2, or anything.  One of them steps through 8
 tables of 16 MB each (hopefully not your typical COBOL program).  On a
 2094-717, that program will run in 60 seconds of CPU time when the CEC
 is 50% busy, 180 seconds of CPU time when the CEC is 95% busy.  Other
 programs using less memory have more stable CPU times, but even with a
 program using 7MB, CPU times doubled when the CEC got up to 98%.  If
 anyone wants to send me an email off line, I can send you an Excel
 showing the numbers I've seen.

when caches first appeared ... they weren't large enough to contain
multiple contexts ... always losing on interrupts, which resulted in
bursts of high cache misses when changing context.

one of the things that I did relatively early for large cache 370s ...
was dynamically monitor the interrupt rate ... and dynamically transition
from running applications enabled for i/o interrupts to running disabled
for i/o interrupts (when the i/o interrupt rate was higher than some
threshold). this was a trade-off between timeliness of handling i/o
interrupts (responsiveness) and total thruput. above the i/o interrupt
rate threshold ... it was actually possible to process i/o interrupts
faster when running disabled for i/o interrupts.
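the monitoring logic can be sketched roughly like this (the threshold
value and names are invented for illustration; the original was kernel
code, not python):

```python
IO_INT_THRESHOLD = 500.0   # interrupts/sec; hypothetical tuning value

def interrupt_mode(interrupts_in_interval, interval_secs):
    """pick whether applications run enabled or disabled for i/o
    interrupts, based on the observed interrupt rate: above the
    threshold, run disabled and drain interrupts in batches."""
    rate = interrupts_in_interval / interval_secs
    return "disabled" if rate > IO_INT_THRESHOLD else "enabled"

light = interrupt_mode(50, 1.0)     # light load -> stay enabled
heavy = interrupt_mode(2000, 1.0)   # heavy load -> batch the interrupts
```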

running disabled for i/o interrupts tended to significantly improve
application cache hit rate ... which translated into higher mip rate
(instructions executed per second) and application executing in less
elapsed time. at the same time, i/o interrupt handling tended to be
batched ... which tended to improve the cache hit rate of the
interrupt handlers ... making them run faster. it was possible to
experience both increased aggregate thruput and faster aggregate i/o
interrupt handling ... when running disabled for i/o interrupts ... and
taking i/o interrupts at specific periods. this was part of what i got
to ship in my resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

also in the mid-70s, about the time future system got killed
http://www.garlic.com/~lynn/subtopic.html#futuresys

i was asked to do a smp project (which never shipped) which involved up
to a five 370 processor configuration ... and extensive microcode
capability. as part of leveraging the microcode capability ... i put the
multiprocessor dispatching function into microcode ... basically
software managed the dispatch queue ... putting things on in specific
order ... the multiprocessor microcode would pull stuff off the dispatch
queue ... service it, and at interrupt/interval ... move it to a different
queue (something similar was seen in the later intel 432). I also
created a similar microcode managed queue for i/o operations
... something like what was later seen in 370-xa.
http://www.garlic.com/~lynn/subtopic.html#bounce

For standard 370 (two-way) multiprocessor cache machines ... i did
two-level operations for cache affinity ... attempting to keep tasks
executing on the same processor that they had previously executed on
(and therefore being able to reuse information already loaded into
cache). Traditional 370 (two-way) multiprocessor cache machines slowed
the processor cycle down by ten percent to accommodate cross-cache
chatter between the two processors ... aka a 2-processor machine started
out with nominal raw thruput of 1.8 times a single processor. System
software multiprocessor overhead would then further reduce (application)
thruput ... so there was a typical rule-of-thumb that a 2-processor
machine had 1.3-1.5 times the thruput of a single processor.
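the rule-of-thumb arithmetic, spelled out (the software-efficiency range
is simply the factor implied by the 1.3-1.5 figure, not an independent
measurement):

```python
# two processors, each cycled 10 percent slower for cross-cache chatter
raw_mp = 2 * 0.9          # nominal 1.8 times a uniprocessor

# system-software multiprocessor overhead eats some more; back out the
# software efficiency implied by the 1.3-1.5x rule of thumb
eff_low, eff_high = 1.3 / raw_mp, 1.5 / raw_mp
```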

I did some sleight of hand in the system software for multiprocessor and
processor affinity ... and had one two-processor configuration where one
processor would chug along at about 1.5 times the mip rate of a
uniprocessor (because of improved cache hit rate ... despite the machine
cycle being ten percent slower) and the other processor at about 0.9
times the uniprocessor mip rate. The effective aggregate application
thruput was slightly over twice uniprocessor ... because of some sleight
of hand in the software implementation and cache affinity.

The change-over to 3081 was supposed to be to multiprocessor-only
machines ... so there was never any need to mention the
uniprocessor/multiprocessor hardware thruput differences.

In the early 90s, machines started appearing with caches in the
1mbyte-4mbyte range ... larger than the real-storage sizes when i first
started redoing virtual memory and page replacement algorithms as an
undergraduate in the 60s ... recent post
http://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
other references
http://www.garlic.com/~lynn/subtopic.html#wsclock

Some of the major software vendors, like DBMS offerings ... were doing
the 

Re: Random thoughts ...

2008-02-05 Thread Anne Lynn Wheeler

[EMAIL PROTECTED] (Tom Marchant) writes:
 I don't know what you mean when you say the cache line was split across
 domains.  I forget whether a line was 32 bytes, but it always contained
 the data from a line of storage.  The cache was split into an instruction
 cache and a data cache, though.  There was significant logic needed to
 deal with that.

801/risc (harvard) architecture allowed for independent instruction
and data caches ... but didn't provide for hardware cache
consistency. this meant that self-modifying instructions were out ... and
for store-into data caches (which didn't automatically flush every
change to storage) ... there were special cache-flush instructions for
use by loaders (i.e. program loaders have the peculiar characteristic
that they can be treating instructions as data ... which then has to get
out to storage ... before the changes can possibly show up in the
instruction cache).

amdahl's machine at the time was viewed as somewhat remarkable in being
able to provide i-cache and d-cache consistency.

a few old posting discussing 5880 
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
http://www.garlic.com/~lynn/2006e.html#31 MCTS
http://www.garlic.com/~lynn/2006j.html#35 Code density and performance?
http://www.garlic.com/~lynn/2006u.html#27 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#34 Assembler question
http://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders?

including some old email from the period (which includes reference
to 5880 announcement with separate instruction and data cache):
http://www.garlic.com/~lynn/2006b.html#email810318



Re: Random thoughts ...

2008-02-05 Thread Anne Lynn Wheeler


some cache x-over discussion
http://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
http://www.garlic.com/~lynn/2008c.html#80 Random thoughts

in general, cache size and cache implementation can affect effective mip
rate. there are some number of benchmark applications on the web that
attempt to profile and characterize those details.

variability in concurrent activity, other applications executing, as well
as possible interrupt rates ... can also affect cache hit/miss ratios ...
and therefore effective mip thruput ... which, in turn, affects measured
cpu use.

there are also capture ratio issues ... which can also affect measured
cpu use. recent post/thread on the subject:
http://www.garlic.com/~lynn/2008.html#42 Inaccurate CPU% reported by RMF and 
TMON

there are a lot of common/similar technology issues with dealing with
various kinds of caches (virtual memory, database caches, hardware
caches, etc).

for other topic drift regarding effective thruput with regard to
database caches in a loosely-coupled environment ... we looked
at this when we were doing scaleup ... old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
and old post
http://www.garlic.com/~lynn/95.html#13

when we were doing ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

a lot of dbms caches have been the equivalent of hardware store-thru
caches ... both the log and the main dbms being updated at about the same
time. some of the dbms implementations did an optimization analogous to
store-into caches ... sometimes referred to as fast-commit; aka as
soon as the log entry was written, the transaction was treated as
committed ... the associated database record writes would be delayed
... being combined with a large number of other record writes for
optimization of disk arm motion. Also, subsequent changes to the same
record ... prior to their writes ... would effectively combine multiple
changes into a smaller number of writes.
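a toy sketch of the fast-commit idea (names and structure invented for
illustration; real dbms logging and recovery are far more involved):

```python
class FastCommitSketch:
    """commit as soon as the log record is written; defer the data-page
    write so repeated updates to the same record coalesce into one."""
    def __init__(self):
        self.log = []      # write-ahead log: the only synchronous write
        self.dirty = {}    # record id -> latest value not yet on disk
        self.disk = {}

    def update(self, rec, value):
        self.log.append((rec, value))   # transaction is committed here
        self.dirty[rec] = value         # later updates overwrite in place

    def flush(self):
        """delayed write: one disk write per dirty record, not per
        update (the step a remote lock grant would otherwise force
        early in a loosely-coupled configuration)."""
        n = len(self.dirty)
        self.disk.update(self.dirty)
        self.dirty.clear()
        return n

db = FastCommitSketch()
for v in range(5):
    db.update("acct1", v)    # five committed updates to one record
writes = db.flush()          # ... but only one data-page write
```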

The problem was that these fast-commit strategies could be negated
when working in a loosely-coupled environment. If a transaction came in
on a (loosely-coupled) processor ... different from a processor which
had a changed record ... not yet written to disk ... then before the
related locks could be obtained ... all such changed records would first
have to be written to their home dbms disk location.

As part of doing the scale-up work for the ha/cmp distributed lock
manager ... we also worked out being able to do direct cache-to-cache
copies ... piggy-backed with granting the lock (in a store-into,
fast-commit, dbms environment).

some of the processor cache implementations had implemented
cache-to-cache copies (on miss) ... say, some of the numa hardware (aka
we had participated in SCI meetings and consulted with some of the
vendors that did NUMA SCI implementations, included one later bought by
your favorite mainframe vendor) ... but they didn't have to worry about
dbms failure recovery. This can get really tricky attempting to
correctly merge logs and do transaction log redo ... in a
loosely-coupled environment with multiple independent logs. at the time,
we originally worked it out, most of the dbms implementations were still
rather dubious that it could work correctly under all possible failure
scenarios.

random past posts mentioning sci:
http://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
http://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
http://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
http://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan  supers query
http://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
http://www.garlic.com/~lynn/2001l.html#16 Disappointed
http://www.garlic.com/~lynn/2002g.html#10 Soul of a New Machine Computer?
http://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage 
Byte Magazines from 1983
http://www.garlic.com/~lynn/2002i.html#83 HONE
http://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
http://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
http://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect 
speeds )
http://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
http://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
http://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
http://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
http://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
http://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS

Re: CPU time differences for the same job

2008-02-05 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Tom Schmidt) writes:
 z/VM waits with a CPU loop (so it doesn't need to come out of a wait state 
 when it is waiting) so it would run just as hot when it was idle as 
 otherwise.  
 (Unless there is special code to account for machine perspiration?) 

arg, if that is true ... i must have been gone way too long. the virtual
machine hypervisor was very careful about waiting ... both because of

1) supporting virtual/virtual ... i.e. it might actually be running in a
virtual machine ... so it really might be stealing cycles from other
applications

2) lots of work had been done in the 60s to make it cost efficient
... somewhat motivated by various customers using the platform for
commercial timesharing service bureaus.

there was obvious work to make the system operate as efficiently as
possible ... aka dispatching, scheduling, paging, pathlengths, etc
... as well as making the processor accounting as accurate as possible.

however, there were additional features helping make the transition to
offering 7x24 availability of the online environment.

this started in the period when systems were normally leased and
processors had cpu meters ... and system lease charges were based on
value accumulated by the cpu meter. one of the tricks developed ... was
making sure that the cpu-meter stopped ... when the system was up and
available but otherwise idle. other work was enhancing offshift
operation ... when usage might be light or non-existent ... allowing
operations w/o onsite operator; aka leave the system up & available for
offshift, remote logins ... but otherwise minimize, as close to zero as
possible, the cost of system operation.

it wasn't sufficient just to put the machine into wait state to stop the
cpu meter ... it was also necessary to quiesce all i/o ... while leaving
the system available for accepting things like incoming keystrokes.

One of the idiosyncrasies of the cpu meter operation was that if it was
running and everything stopped ... the meter would continue to run for
400 milliseconds before it actually stopped (i.e. for the cpu meter to
actually stop, idle had to be for periods longer than 400 milliseconds).
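the 400-millisecond coast behavior can be sketched as a toy simulation (the interval representation and function name are invented for illustration; only the 400ms figure comes from the post):

```python
# toy model of the described cpu-meter quirk: after activity stops, the
# meter keeps running for up to 400 ms, so only idle gaps longer than
# 400 ms actually stop it.

COAST_MS = 400

def metered_time(busy_intervals):
    """busy_intervals: sorted list of (start_ms, end_ms) activity periods."""
    total = 0
    for i, (start, end) in enumerate(busy_intervals):
        total += end - start             # meter runs during activity
        if i + 1 < len(busy_intervals):
            gap = busy_intervals[i + 1][0] - end
            total += min(gap, COAST_MS)  # meter coasts through short gaps
    return total

# 1000 ms of work, a 100 ms gap (meter never stops), 100 ms of work,
# a 2000 ms gap (meter stops after coasting 400 ms), 100 ms of work
print(metered_time([(0, 1000), (1100, 1200), (3200, 3300)]))
```

so short idle periods are fully charged to the meter, which is why stopping it required idle stretches longer than 400 milliseconds.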

trivia question ... what was the wakeup interval for the mvs srm?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: What happened to resumable instructions?

2008-02-03 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
 o Page faults?  How are page faults handled for resumable
   instructions?  Is a fault generated for any page in the
   range of either operand, with the OS attempting to stage
   both, or does a fault possibly for each resumption.

the resumable instructions were storage-to-storage operations with
address and length specified in registers for both source and
destination. the requirement was that the registers be appropriately updated
as the instruction proceeded ... so that on any interrupt, all registers
will have been appropriately updated (and therefore, on restart, the
instruction would resume from the appropriate position).

the problem wasn't so much the psw address being correct on the
interrupt ... the issue was that all registers actually reflected the
correct values (i.e. working copies of the values weren't squirreled
away in some other hardware location).

prior to mvcl/clcl ... instructions would pretest start and end location
for both (2k) storage protection and 4k page fault. with mvcl/clcl, the
testing had to be done incrementally for every byte processed (although
there were later optimizations that would do it in larger blocks, and
then fall back to per byte mode ... for any residual).
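the incremental register updating described above can be sketched as a toy model (invented names, nothing like actual millicode): all of the instruction's state lives in the architected "registers", so execution can stop at any unit boundary and later resume exactly where it left off.

```python
# toy sketch of an MVCL-style interruptible storage-to-storage move:
# addresses and lengths for both operands are kept current in "registers"
# after every byte, so an interrupt at any point leaves resumable state.

class ResumableMove:
    def __init__(self, storage, src, src_len, dst, dst_len, pad=0):
        self.storage = storage            # bytearray standing in for real storage
        self.src, self.src_len = src, src_len
        self.dst, self.dst_len = dst, dst_len
        self.pad = pad                    # pad byte once the source is exhausted

    def step(self, max_units):
        """Process up to max_units bytes, then 'take an interrupt'.
        Returns True while the instruction still has work left."""
        units = min(max_units, self.dst_len)
        for _ in range(units):
            if self.src_len > 0:
                self.storage[self.dst] = self.storage[self.src]
                self.src += 1
                self.src_len -= 1
            else:
                self.storage[self.dst] = self.pad
            self.dst += 1
            self.dst_len -= 1
        return self.dst_len > 0           # registers fully describe the residue

storage = bytearray(b"hello---")
mv = ResumableMove(storage, src=0, src_len=5, dst=5, dst_len=3)
while mv.step(2):                         # "interrupt" every 2 bytes processed
    pass                                  # OS could dispatch other work here
print(bytes(storage))                     # b"hellohel"
```

the per-byte bookkeeping here is exactly what makes the access-exception pretest impossible up front: the storage-key / page-fault check has to happen incrementally as the operands advance.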

highly pipelined machines get into lots of issues with what the
current (visible) register contents are at any point in time (lots of
different parallel executing circuits, possibly with their own register
values). newer machines also have extensive hardware ras with status
checkpoint and instruction retry (to mask various kinds of transient
hardware glitches) ... instructions that execute incrementally aggravate
status checkpointing overhead (and instruction retry logic).

i actually ran into an early problem with the 370/125 implementation on
vm370. at vm370 boot (on a real machine), it would load maximum values
into mvcl (initialized to clear storage) and kick it off; it would zero
all of real storage ... interrupting when it hit the end of real
storage. early 370/125 microcode had a bug where it was still
pretesting origin & end for mvcl/clcl and aborting the instruction
before starting ... which to vm370 made it appear as if there was no real
storage.

vm370 was originally targeted at supporting 256kbyte machines ... prior
to announce of the 370/125 ... and was never announced for the 370/125. at
this point, a customer had requested assistance in getting vm370 running
on the 370/125. while vm370 had non-swappable kernel support ... the
amount of fixed kernel had somewhat bloated between the original announce
... and this point ... which also significantly aggravated vm370
operation in 256k real storage.

recent post referring to having done lots of work in 60s as
undergraduate on cp67 ... much of which got picked up and shipped in the
product:
http://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15

one of the other things that i had done for cp67 in the 60s was making
portions of the kernel pageable ... which wasn't shipped in cp67 product
... but was picked up for vm370 product. however, since the initial
release ... things had gotten lax between what was in the fixed kernel
and what was in the pageable kernel.

by the time of the customer vm370 370/125 request to the science
center
http://www.garlic.com/~lynn/subtopic.html#545tech

I had moved much of the cp67 work (that had been dropped in the cp67 to
vm370 morph) as well as adding lots of new stuff ... and was supplying
highly modified vm370 systems to a large number of internal
datacenters. some old email references:
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102

and part of getting vm370 up and running on the 370/125 also involved
going thru lots of kernel infrastructure and significantly reducing the
fixed storage footprint for running in a 256kbyte real storage machine.



Re: Job ad for z/OS systems programmer trainee

2008-02-02 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Binyamin Dissen) writes:
 Well, back when I was starting SP'ing (Western Electric), slightly earlier
 than your time frame, that was the job.

 You first figured out the bug, and then reported it to the vendor. There
 wasn't as much hand holding as nowadays. You were expected to do most of the
 research yourself.

 Of course, nowadays with OCO and no fiche, this would be a bit harder.

some 40yrs ago, back when i was an undergraduate and doing a lot of
virtual machine kernel hacking ... i would get requests for new features
from the vendor. in later years, i conjectured (because of the
nature of some of the requests) that some may have originated from some
gov. agency.

for some slight topic drift:
http://www.nsa.gov/selinux/list-archive/0409/8362.cfm



Re: Data Erasure Products

2008-02-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Anne & Lynn Wheeler) writes:
 the issue normally reduces to what is the threat model? security
 classification tends to be associated with threat model where divulging
 the information is not desirable ... and classification level attempts
 to make the measures to prevent information divulging proportional to
 the damage that might happen if the information is divulged (and/or the
 effort that an attacker will go to in order to get the data). For
 magnetic media this might be something like overwriting a specific
 number of times with (different) random data ... nist standard:
 http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf

re:
http://www.garlic.com/~lynn/2008c.html#47 Data Erasure Products 

oh, and note recent article:

'Erased' personal data on agency tapes can be retrieved, company says
http://www.govexec.com/dailyfed/0108/012308j2.htm

from above:

Personal and sensitive government data -- including employees' personal
data -- on magnetic tapes that federal agencies erase and later sell can
be retrieved using simple technology, according to an investigation
conducted by a storage tape manufacturer.

... snip ...

the above article references a GAO report/study:

According to its September 2007 report (GAO-07-1233R), GAO concluded it
could not find any comprehensible data on any of the tapes using
standard commercially available equipment and data recovery techniques,
specialized diagnostic equipment, custom programming or forensic
analysis.

... snip ...

i.e. gao report
http://www.gao.gov/new.items/d071233r.pdf

old article from last sept:

Government sale of used magnetic tape storage not a big security risk, GAO reports
http://www.networkworld.com/community/node/19807



Re: Migration from Mainframe to othre platforms - the othe bell?

2008-02-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Steve Comstock) writes:
 WAIT! STOP! Is an AIX machine a mainframe??? I don't
 think so. I know the definition is a slippery one,
 but to me a mainframe is a Syztem z machine, or one
 of its predecessors (or a competitive machine from
 the past; maybe Unisys today).

there have been a couple different AIXs.

there is the risc flavor.

research and opd were working on a 801/risc (romp) followon to the
displaywriter. when that got killed, the group looked around for
something else to apply the machine to ... and settled on the unix
workstation market. they got the group that had done the port of att
unix for pc/ix to do a similar port. this morphed displaywriter followon
was released as the pc/rt and the system as aix v2. it became aix v3
for the rios/power chips on the rs/6000.
http://www.garlic.com/~lynn/subtopic.html#801

independent of that, there was the port of UCLA's locus unix look-alike
to 370. it was sort of a unix SAA strategy ... with the same system
announced as aix/370 and aix/386. locus provided for a transparent
networked filesystem as well as fairly transparent process migration
... i.e. executing code could move to different processors (even to
different kinds of processors ... for some value of transparent).
aix/370 was later upgraded to aix/esa.

wiki page:
http://en.wikipedia.org/wiki/IBM_AIX_(operating_system)

the above is slightly garbled. it mentions that much of the aix v2 kernel
was written in the PL/I programming language. the pl.8 language and cp.r operating
system had been developed for 801/risc and was being used for the
displaywriter followon. when that project was killed ... and the machine
retargeted for unix workstation ... sort of needed something to do for
all the pl.8 programmers. A VRM (virtual resource manager) was defined and
implemented in pl.8 ... with an abstract virtual machine interface.
The vendor that had done the pc/ix port then was instructed to port to
the defined abstract interface. The claim was that this could be done
faster and with less total resources than having the vendor implement
directly to bare metal. This was shown to be incorrect ... and had other
implications ... since things like new device drivers ... required both
a VRM implementation as well as an AIX implementation. The VRM was
dropped as part of the AIX V3 move to RIOS/POWER chips.



Re: Migration from Mainframe to othre platforms - the othe bell?

2008-02-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Scott Lurndal) writes:
 Locus TNC (Transparent Network Computing).  Morphed into OSF1/AD, IIRC.

 Unisys did a similar OS (SVR4/MK (mk for microkernel, based on 
 Chorus)) for the OPUS product 1989-1997.  Became part of the Amadeus
 project with USL and the EC (can't remember the name of the EC 
 initiative).

re:
http://www.garlic.com/~lynn/2008c.html#50 Migration from Mainframe to othre platforms - the othe bell?

the palo alto group had been working with UCLA on locus and had it
installed on series/1 and some 68k machines. they were also working on a
bsd port for 370. their bsd/370 product was redirected to the pc/rt and
offered as AOS on the (bare metal) PC/RT (a counter example as to the
minimal effort it took to do a unix port to the native hardware as
opposed to the difficulty involved in dealing with the VRM as in the
AIX V2 offering). the palo alto group then produced the aix/370 and
aix/ps2 offerings.

the corporation equally funded mit project athena with dec ... and
each had an assistant director at the project ... which turned out
things like kerberos and X (windows, as well as some number of
other things) ... recent kerberos reference/post:
http://www.garlic.com/~lynn/2008c.html#31 Kerberized authorization service

the corporation also funded the cmu andrew projects (to a level equal to
the combined corporations' funding at mit) ... which turned out things
like the andrew filesystem, andrew widgets, mach, and camelot.

OSF design meetings were something of a mashup of locus, project athena,
andrew, hp/apollo domain, and (at least) aix distributed filesystem.
wiki page
http://en.wikipedia.org/wiki/Open_Software_Foundation

from above:

OSF's standard Unix implementation was known as OSF/1 and was first
released in 1992.[2] For the most part, it was a failure; by the time
OSF stopped development of OSF/1 in 1994, the only vendor using OSF/1
was Digital, which rebranded it Digital UNIX (later known as Tru64 UNIX
after Digital's acquisition by Compaq).

... snip ...

during this period ... we were sometimes around the cambridge area,
but mostly busy with our ha/cmp product (for aix):
http://www.garlic.com/~lynn/subtopic.html#hacmp
and this old reference to parallel oracle meeting
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
and these old emails on ha/cmp scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

cmu mach (microkernel) showed up in a number of implementations, including
NeXT and the current apple operating system ... recent post:
http://www.garlic.com/~lynn/2008b.html#20 folklore indeed
and wiki page:
http://en.wikipedia.org/wiki/Mach_kernel

above has reference to A comparison of Mach, Amoeba and Chorus
http://www.cdk3.net/oss/Ed2/Comparison.pdf

camelot (as encina) was included in the transarc spinoff (along with
the andrew filesystem) and then directly purchased (joke in previous
reference about funding it three different times) ... which has shown
up as an aix cics transaction product.  wiki page ...
http://en.wikipedia.org/wiki/Transarc

osf was supposedly an alternative to the att/sun official unix.
camelot/encina was an alternative to att tuxedo transaction processing
(which was spun off and eventually showed up at BEA, recently purchased
by oracle). wiki page
http://en.wikipedia.org/wiki/Tuxedo_(software)

For a totally different mainframe amadeus drift ... in the late 80s, my
wife had done a short stint as chief architect of (this) amadeus (which
was being done sort of as a european alternative to the United and
American airline mainframe res systems) that started off with the old
eastern airline res system
http://www.amadeus.com/
and wiki page
http://en.wikipedia.org/wiki/Amadeus_CRS

other mainframe unixes in the 80s were the amdahl uts system and the
special product done for internal ATT use, which layered a unix interface
on top of a stripped down tss/370 kernel.



Re: Migration from Mainframe to othre platforms - the othe bel?

2008-02-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Henry Willard) writes:
 Other than having AIX in the name and being ports based on some version of
 Unix, AIX/ESA and AIX/370 didn't have much in common with the AIX that runs
 on System p.

re:
http://www.garlic.com/~lynn/2008c.html#50 Migration from Mainframe to othre platforms - the othe bell?
http://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?

one of the issues about the various mainframe unix ports running under
vm370 ... was that the effort to add mainframe ras & erep to a unix base
was significantly larger than doing the straight-forward port. running
the unix port in a virtual machine ... could leave the ras and erep
processing to the base virtual machine kernel. another small issue was
getting field engineering sign-off on machine/hardware service unless
there was adequate/approved ras and erep.

had this discussion about the vast difference in mainframe ras & erep
and various other systems, with other attendees at this old nasa
dependable computing conference:
http://www.hdcc.cs.cmu.edu/may01/index.html



Young mainframers' group gains momentum

2008-01-31 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


Young mainframers' group gains momentum
http://www.networkworld.com/news/2008/013108-young-mainframers-group-gains.html
Young mainframers' group gains momentum
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9060499

from above:

ZNextGen, an organization aimed at young mainframe programmers, has
gained significant momentum since it was created roughly two years ago
through IBM and its user group, Share, according to its leaders.

... snip ...

SHARE zNextGen
http://www.znextgen.org/

from above:

zNextGen, a user-driven community for new and emerging System z
professionals that has the resources to help expediate your professional
development skills.

... snip ...



Re: Data Erasure Products

2008-01-31 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] (Stephen Mednick) writes:
 it's not a case of how valuable the data is, more importantly it's to
 do with what the security classification is that has been assigned to
 the data. Depending on the data's security classification dictates the
 media overwriting/sanitisation method that is it be deployed in
 accordance with government requirements.

security classification is a simplification ... like role-based access
control is a simplification for permissions. ... recent post on dealing
with permissions
http://www.garlic.com/~lynn/2008b.html#26 folklore indeed

the issue normally reduces to what is the threat model? security
classification tends to be associated with threat model where divulging
the information is not desirable ... and classification level attempts
to make the measures to prevent information divulging proportional to
the damage that might happen if the information is divulged (and/or the
effort that an attacker will go to in order to get the data). For
magnetic media this might be something like overwriting a specific
number of times with (different) random data ... nist standard:
http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf

An example of how this gets simplified is example of consumer financial
information stored at a merchant. The damage gets translated into
security proportional to risk ... and the risk is what is the value of
the information to the merchant ... old post on the subject:
http://www.garlic.com/~lynn/2001h.html#61 Security Proportional To Risk

The problem is that the real threat model, and therefore risk, is that the
value of the information to the consumer (and to any attacking crook) is
possibly one hundred times larger than the value of the information to
the merchant. The merchant is required to keep transaction logs (and the
associated account numbers) for some period as part of mandated business
processes.

The information value (to the merchant) is some part of the merchant's
profit margin on the transaction ... for hypothetical example for some
number of product transactions, this could be $10,000. The value of the
information to the crook, is related to the credit limits associated
with the individual accounts. This could conceivably be $10,000,000
(totally unrelated to the value of the information to the merchant,
i.e. some portion of the profit on the purchased products). Since the
value to the crook can be 100 to 1000 times larger, the attacking crooks
can afford to outspend the defending merchants by possibly one hundred
times.
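the arithmetic above, spelled out (the dollar figures are the post's own hypothetical numbers, not measurements):

```python
# asymmetric value of the same transaction log: worth a slice of profit
# margin to the defending merchant, but the full credit-limit exposure
# to an attacking crook.

merchant_value = 10_000       # merchant's profit margin at risk
crook_value = 10_000_000      # aggregate credit limits exposed

ratio = crook_value // merchant_value
print(f"attacker can justify outspending defender by up to ~{ratio}x")
```

which is the core of the "security proportional to risk" mismatch: the party choosing the defense budget values the data at a tiny fraction of what the attacker does.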

in the mid-90s, the x9a10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for all retail payments. Part of this was looking in
detail at end-to-end vulnerabilities and threat models ...  as part of
coming up with x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

part of the x9.59 financial standard was eliminating the usefulness of
the account transaction log information (at merchants) to attacking
crooks  i.e. it didn't involve trying to prevent attacking crooks
from getting at the information ... it just made the information useless
to crooks for performing fraudulent financial transactions.

A different example was we also got involved in co-authoring the
financial industry x9.99 privacy standard. As part of that we had to
look at both GLBA and HIPAA (financial transactions can used for medical
procedures which may be listed).

One of the issues in HIPAA is that there is a real requirement to make
some amount of medical procedure information available. As a result,
HIPAA allows for information to be available if it can't be associated
with an individual (aka deidentified).

deidentified
 Under the HIPAA Privacy Rule, data are deidentified if either (1) an
 experienced expert determines that the risk that certain information
 could be used to identify an individual is 'very small' and documents
 and justifies the determination, or (2) the data do not include any of
 the following eighteen identifiers

... snip ...

As part of working on x9.99, we put together a privacy merged taxonomy
and glossary ... see:
http://www.garlic.com/~lynn/privacy.htm

for other details see notes at
http://www.garlic.com/~lynn/index.html#glosnote



Re: New Opcodes

2008-01-30 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Tom Marchant) writes:
 There may have been speculation within IBM that Macrocode, and the
 architecture that enabled it, was to make it easier to develop new features.
 I can tell you that I was at Amdahl at the time working on the 580.  That was
 definitely a major reason for it.

re:
http://www.garlic.com/~lynn/2008c.html#29 New Opcodes
http://www.garlic.com/~lynn/2008c.html#32 New Opcodes
http://www.garlic.com/~lynn/2008c.html#33 New Opcodes
http://www.garlic.com/~lynn/2008c.html#35 New Opcodes

well, how should i have phrased it? ...

i would run into lots of people ... including at the monthly SLAC meetings
... and frequently be asked for advice ... there were lots of issues
about not divulging confidences ... even confidences for companies i
didn't work for.

complicating things, i had a nearly complete set of individually serial
numbered (candy striped) 811 documents (i.e. architecture documents
named for their nov78 date).



Re: New Opcodes

2008-01-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Tom Marchant) writes:
 It also says, 894 instructions (668 implemented entirely in hardware)

 The latest POO lists about 750 instructions.  I know that there are a few not 
 listed in the POO.  Still, it sounds like it's a lot over 50.

as per past discussions re the architecture red book (i.e. a cms script
file where a command line option would print the full machine architecture
or just the POO subset; the full machine architecture was distributed in red
3ring binders) and the compare&swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

getting an instruction added could require a lot of justification.

so one way of parsing the reference to 50+ added instructions to
improve compiled code efficiency ... could be that over 50 of
the added instructions were justified by improving compiled code
efficiency (w/o saying anything at all about the total number of added
instructions and/or the justification for any of the other
added instructions).



Re: New Opcodes

2008-01-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Anne  Lynn Wheeler [EMAIL PROTECTED] writes:
 as an aside ... there was some similar speculation two decades ago about
 such stuff. there was even some speculation that one of the other clone
 processor vendors creation of macrocode was to enable them to quickly
 adapt to such things (be more agile in tracking, implementing, deploying
 changes).

re:
http://www.garlic.com/~lynn/2008c.html#29 New Opcodes
http://www.garlic.com/~lynn/2008c.html#32 New Opcodes

actually such speculation dates back three decades to the introduction
of cross-memory instructions and dual-address space mode on 3033



Re: New Opcodes

2008-01-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Anne  Lynn Wheeler [EMAIL PROTECTED] writes:
 actually such speculation dates back three decades to the introduction
 of cross-memory instructions and dual-address space mode on 3033

re:
http://www.garlic.com/~lynn/2008c.html#29 New Opcodes
http://www.garlic.com/~lynn/2008c.html#32 New Opcodes
http://www.garlic.com/~lynn/2008c.html#33 New Opcodes

part of the speculation was that the cross-memory/dual-address space
instructions used more STOs (segment table origins) simultaneously
... and the 3033 had inherited its TLB (and STO-associative)
implementation from the 168. The additional concurrent STO use was
putting pressure on the TLB-miss rate and therefore performance.
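the STO pressure can be illustrated with a toy model (invented, not the 168/3033 design): a TLB whose entries are tagged with the segment-table origin they translate for, so cycling among more address spaces than the TLB comfortably tracks turns the same reference pattern into far more misses.

```python
# toy STO-tagged TLB with LRU replacement: entries are keyed by
# (STO, virtual page), so concurrent use of several address spaces
# competes for the same small set of entries.

from collections import OrderedDict

def misses(refs, capacity):
    """refs: sequence of (sto, virtual_page) references; returns miss count."""
    tlb, count = OrderedDict(), 0
    for key in refs:
        if key in tlb:
            tlb.move_to_end(key)        # hit: refresh LRU position
        else:
            count += 1                  # miss: walk tables and install entry
            tlb[key] = True
            if len(tlb) > capacity:
                tlb.popitem(last=False) # evict least recently used
    return count

pages = list(range(6))
one_sto = [(0, p) for p in pages] * 10                       # one address space
four_stos = [(s, p) for _ in range(10) for s in range(4) for p in pages]
print(misses(one_sto, capacity=8), misses(four_stos, capacity=8))
```

with one STO the working set fits and only the first pass misses; cycling the same pages across four STOs blows the capacity and every reference misses.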

on the other hand, large 168 & 3033 installations were facing enormous
pressure on the amount of application addressable space ...

aka past posts about how the pointer passing paradigm from the real memory
heritage dictated the SVS and subsequent MVS implementation with the
kernel appearing in the application address space. The MVS design included
moving (non-kernel) subsystems into their own address spaces
... dictating the common segment implementation (supporting squirreling
away data for pointer passing APIs). Larger installations were having to
constantly grow the common segment ... with 24bit addressing (16mbytes),
the kernel taking up 8mbytes ... and the common segment growing from
4mbytes to 5mbytes (and more) ... would only leave 3-4mbytes (or less)
for applications (even tho there was a virtual address space per
application).
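the address-space squeeze, as simple arithmetic (sizes as stated above):

```python
# 24-bit addressing gives 16MB of virtual storage per address space;
# the kernel plus an ever-growing common segment are mapped into every
# one of them, leaving little for the application itself.

MB = 1 << 20
addressable = 16 * MB          # 24-bit addressing limit
kernel = 8 * MB                # MVS kernel mapped into every address space
for common in (4 * MB, 5 * MB):
    left = addressable - kernel - common
    print(f"common={common // MB}MB -> {left // MB}MB left for application")
```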

the future system distraction had redirected a lot of effort
into non-370 activity
http://www.garlic.com/~lynn/subtopic.html#futuresys

when future system was killed, there was a mad rush to get stuff back into
the 370 product pipeline. 370-xa was going to take 7-8 yrs (with 31-bit
addressing, access registers, program call & return, etc). the stop-gap
was the 3033 ... which was 168 wiring/logic remapped to faster chip
technology. The increasing machine capacity was adding more
applications, tending to grow the common segment and putting massive
pressure on available (virtual) memory for applications.

There was speculation that the 3033 cross-memory and dual-address space
hardware changes were purely to create incompatibilities for the clone
processor vendors ... however there was more than enuf other
justification, even if the clone vendors hadn't existed at all (an
intermediate step on the way to access registers) ... aka dual-address
space instructions allowed a subsystem to reach directly into the calling
application's virtual address space to access values pointed to by
the passed pointers (w/o requiring the common segment hack).
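a toy model (all names invented for illustration) contrasting the common-segment hack with dual-address-space access:

```python
# without cross-memory, a subsystem can only address its own space, so
# data behind a passed pointer must first be squirreled into a shared
# common segment; with dual-address-space access, the subsystem can
# address the caller's space as a secondary space and follow the
# pointer directly.

class AddressSpace(dict):
    """toy virtual-address -> value mapping"""

def via_common_segment(caller, common, ptr):
    common[ptr] = caller[ptr]     # data copied into the shared common area
    return common[ptr]            # subsystem only ever sees its own space

def via_dual_address_space(caller, ptr):
    return caller[ptr]            # subsystem follows the pointer directly

caller = AddressSpace({0x1000: "record"})
common = AddressSpace()
print(via_common_segment(caller, common, 0x1000))
print(via_dual_address_space(caller, 0x1000))
```

the common-segment path is why that area had to keep growing: every pointer-passing API needed its data staged there, eating into the 16MB every application could address.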



Re: New Opcodes

2008-01-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2008c.html#29 New Opcodes

justification is justification ... not all have to be there based on the
same justification.

as an aside ... there was some similar speculation two decades ago about
such stuff. there was even some speculation that one of the other clone
processor vendors' creation of macrocode was to enable them to quickly
adapt to such things (be more agile in tracking, implementing, deploying
changes).

misc. past posts mentioning macrocode.
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2002p.html#48 Linux paging
http://www.garlic.com/~lynn/2003.html#9 Mainframe System 
Programmer/Administrator market demand?
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2005d.html#59 Misuse of word microcode
http://www.garlic.com/~lynn/2005d.html#60 Misuse of word microcode
http://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned 
programming language
http://www.garlic.com/~lynn/2005p.html#14 Multicores
http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New 
Instructions for the z9 Processor
http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
http://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
http://www.garlic.com/~lynn/2006j.html#32 Code density and performance?
http://www.garlic.com/~lynn/2006j.html#35 Code density and performance?
http://www.garlic.com/~lynn/2006m.html#39 Using different storage key's
http://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
http://www.garlic.com/~lynn/2006u.html#33 Assembler question
http://www.garlic.com/~lynn/2006u.html#34 Assembler question
http://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old 
days?
http://www.garlic.com/~lynn/2007d.html#3 Has anyone ever used self-modifying 
microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007d.html#9 Has anyone ever used self-modifying 
microcode? Would it even be useful?
http://www.garlic.com/~lynn/2007j.html#84 VLIW pre-history
http://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Job ad for z/OS systems programmer trainee

2008-01-28 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Ted MacNEIL) writes:
 My degree is a major in computer science with a minor in statistics.
 My first job was as a capacity analyst.
 My degree was 100% applicable.

a lot of capacity planning came out of a lot of performance work at the
science center 
http://www.garlic.com/~lynn/subtopic.html#545tech

... including modeling and workload profiling and the fundamentals for
capacity planning.

science center had done the port of apl\360 for cms\apl ... and rather
than the toy 16-32kbyte workspaces ... they could be as large as virtual
memory/machine size.

cms\apl became the basis for much of the sales/marketing support
applications on the world-wide hone system (sometime in the early 70s,
branch offices couldn't even submit mainframe orders that hadn't first
been processed by a hone application)
http://www.garlic.com/~lynn/subtopic.html#hone

one of the applications deployed on HONE was the performance predictor
(a system analytic model from the science center implemented in apl
... allowing branch people to characterize the customer's configuration
and workload and then ask what-if questions regarding changes to
configuration and/or workload).

the production science center online system (first cp67 and then vm370)
was heavily instrumented and eventually had 7x24 activity data for
approaching two decades ... and established a similar standard for other
internal systems.

besides the system analytical modeling (including the work that resulted
in the performance predictor application) there was also a number of
event-driven model implementations.

there were also a number of execution sampling implementations ... one
which resulted in the VS/Repack product in the mid-70s ... which would take
a trace of instruction address & storage references and do semi-automated
program reorganization for optimal paging operation. Before release as a
product, it was used extensively internally by several products making the
transition from real-storage environment to virtual storage environment
(i.e. for instance, IMS made extensive use of the application).

another trace/sampling implementation was also used to determine what
functions went into VM ECPS (i.e. 6k bytes of vm370 kernel instructions
that represented approx. 70percent of kernel pathlength execution was
moved to microcode).
http://www.garlic.com/~lynn/94.html#21

another performance optimization methodology used at the science center
was multiple regression analysis ... with the detailed 7x24 system
monitoring ... it was possible to determine where the system was
spending large percentage of its time.

i've mentioned before using this methodology on a large 450k-line cobol
application that had been heavily studied and optimized over a period of a
couple of decades (using products like strobe). it ran on 40+ max
configured mainframe CECs ($1.2b-$1.5b aggregate). i used multiple
regression analysis to identify another 14percent performance
improvement (that hadn't been turned up using the other methodologies).
misc. recent references:
http://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
http://www.garlic.com/~lynn/2006s.html#24 Curiousity: CPU % for COBOL program
http://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in 
Mainframe?
http://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran 
developer, dies
http://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: How does ATTACH pass address of ECB to child?

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Peter Relson) writes:
 You can be certain that POST will always support the CS quick-post protocol
 and the LOCAL LOCK.

re:
http://www.garlic.com/~lynn/2008b.html#50 How does ATTACH pass address of ECB 
to child?

as stated in the above post ... the principles of operation wording is
from over 35yrs ago ... charlie had invented the compare&swap
instruction at the science center while doing fine-grain locking for cp67
... and the pok favorite son operating system people had rejected it
... i.e. test&set was more than adequate for the global kernel
spin-lock multiprocessor support from 360 ... mention of global
spin-lock
http://www.garlic.com/~lynn/2008b.html#58 How does ATTACH pass address of ECB 
to child?

the challenge given the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

was to come up with example uses for the compare&swap instruction
... other than (multiprocessor) locking operations ... some discussion
http://www.garlic.com/~lynn/2008b.html#58 How does ATTACH pass address of ECB 
to child?

i.e. the compare&swap instruction can be used for atomic updates for a
whole class of operations w/o requiring a locking operation (which would
typically also require a supervisor call into the kernel to have an
uninterruptable operation ... as a means of simulating atomic update).
as mentioned in the above analogy with dbms financial transactions and
optimistic operation ... the lock is avoided ... if certain original
conditions continue to hold when the transaction changes are actually
made permanent.
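a minimal sketch of that kind of lock-free atomic update, in C11 atomics
rather than 370 assembler (the POO appendix free-pool examples are the
ancestor of this pattern; the names push/pop and struct node here are
purely illustrative):

```c
#include <stdatomic.h>
#include <stddef.h>

/* lock-free LIFO free list: push/pop manipulate the list head with a
   single compare&swap retry loop -- no lock, no supervisor call. */
struct node { struct node *next; };

void push(struct node *_Atomic *head, struct node *n)
{
    n->next = atomic_load(head);
    /* if another thread changed the head between the load and the
       swap, n->next is refreshed with the current head and we retry */
    while (!atomic_compare_exchange_weak(head, &n->next, n))
        ;
}

struct node *pop(struct node *_Atomic *head)
{
    struct node *old = atomic_load(head);
    /* note: a production free list also needs ABA protection on pop
       (hazard pointers, version counters, or compare-double&swap) */
    while (old && !atomic_compare_exchange_weak(head, &old, old->next))
        ;
    return old;
}
```

the retry loop is exactly the "original conditions continue to hold"
rule: the swap only completes if the head still holds the value that was
originally fetched.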

so the wording in the principles of operation was from the stand-point
that the pok favorite son operating system people still had to
understand the implications of how the compare&swap instruction could be
used ... and then actually have the changes permeate their
implementation.

my familiarity with cp67 dates back to when 3 people from the science
center
http://www.garlic.com/~lynn/subtopic.html#545tech
came out to install cp67 at the univ the last week of jan68 (40yrs ago
this week)

one might claim that maybe it's time to update that 35+yr old
perspective in the principles of operation compare&swap writeup.

A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03DT=20040504121320

other posts in this thread:
http://www.garlic.com/~lynn/2008b.html#31 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#47 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#48 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#51 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#60 How does ATTACH pass address of ECB 
to child?

for some topic drift ... related to dbms operation ... misc. posts about
the original relational/sql project
http://www.garlic.com/~lynn/subtopic.html#systemr

and transcriptions of '95 SQL reunion
http://www.mcjones.org/System_R/SQL_Reunion_95/index.html

there is mention of compare&swap in the discussion about use for
locking
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Shoot-ou.html#Index311
in this session:
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Shoot-ou.html

however, the speculation about the origin of the compare&swap instruction
was wrong (i.e. compare&swap was chosen because CAS are charlie's
initials).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: How does ATTACH pass address of ECB to child?

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (David Logan) writes:
 Sorry for the slightly off-topic question, but how come this persons posts
 are always so complicated to work through with all of the pieces and URL
 links? Is it the function of how they are posting (i.e. perhaps online with
 certain options), or is this a manual effort?

 I'm hoping to not spark a long debate. I'm hoping it's just a one or the
 other answer.

re:
http://www.garlic.com/~lynn/2008b.html#63 How does ATTACH pass address of ECB 
to child?

some of it overlaps 
http://www.garlic.com/~lynn/2008b.html#27 Re-hosting IBM-MAIN

and some of it explained in these recent postings
http://www.garlic.com/~lynn/2008b.html#57
http://www.garlic.com/~lynn/2008b.html#64

starting in the late 70s, i had been doing semi-automated discussion groups and
mailing lists on the internal network
http://www.garlic.com/~lynn/subtopic.html#internalnet

which was larger than the arpanet/internet from just about the beginning
until approx the summer of '85.

then somebody packaged up about 300pgs of hardcopy of the discussions,
put them in 3ring tandem binders and sent them to everybody on the
executive committee (ceo, pres, senior vps). there was also an article in
datamation. i got blamed for all of it.

the result was a whole lot of corporate churn and investigations into
this new phenomena. part of the results were internal tools deployed to
officially support electronic online discussion. the major tool could
operate both in usenet kind of mode as well as in mailing list kind of mode
simultaneously (end user could select the option).

as noted this was on the internal network and distinct from both
bitnet/earn

http://www.garlic.com/~lynn/subnetwork.html#bitnet

... although the subsequent listserv on bitnet had some similarities to
the internal tool.

for other topic drift ... old email from the person setting up
earn (in europe) looking for computer conferencing tools:
http://www.garlic.com/~lynn/2001h.html#email840320

as previously referenced listserv history not started until '86
in europe/earn:
http://www.lsoft.com/corporate/history_listserv.asp

i have this old joke about in the early 70s, going over to paris to set
up a HONE clone (i.e. internal online computing system) ... misc. past posts
mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

as part of the corporate EMEA hdqtrs move from ny to paris ...  and
having some difficulty reading my email back in the states.

one of the other outcomes was that there was a researcher paid to study
how i communicate ... they sat in the back of my office for nine months,
went with me to meetings, took notes on face-to-face, phone, and
electronic communication. they also had softcopy of all my incoming and
outgoing email as well as logs of all instant messages. the resulting
research was also used as a stanford phd thesis (joint with computer ai
and language) as well as subsequent papers and books. misc. past
posts related to computer mediated conversation
http://www.garlic.com/~lynn/subnetwork.html#cmc

lots of past posts over the last dozen years or so discussing
compare&swap instruction and/or multiprocessor implementations
http://www.garlic.com/~lynn/subtopic.html#smp

for slightly related drift as to references, URLs and posting technology
... again go back to early days at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

not only did the compare&swap instruction originate at the science
center, but also virtual machine systems, a lot of interactive computing
facilities as well as the internal network technology.  one of the other
things invented at the science center was gml (letters chosen
for the first letters of the last names of the three people ... although
most people are more familiar with generalized markup language) in 1969.
gml has subsequently morphed into sgml, html, xml, etc.

reference to work at cern morphing sgml into html
http://infomesh.net/html/history/early

and reference to first webserver outside europe on the slac
vm system (cern sister location)
http://www.slac.stanford.edu/history/earlyweb/history.shtml

for another reference there was this article in ibm systems mag. 3 years
ago  although some of the details are slightly garbled:
http://www.ibmsystemsmag.com/mainframe/marchapril05/stoprun/10020p1.aspx

there is also an old joke from share circa 1974 ... when cern presented a
paper about the results of a cms/tso bakeoff. the company couldn't
restrict customer copies of the report ... but internal copies were
marked confidential/restricted ... i.e. available on a need-to-know basis
only (wouldn't want to contaminate employees with a detailed TSO vis-a-vis
CMS comparison).

in any case, i've been doing online for almost as long as i've been
using cp67 ... 40yrs this week since three people came out the last week
of jan68 from the science center to install cp67

Re: How does ATTACH pass address of ECB to child?

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 for another reference there was this article in ibm systems mag. 3 years
 ago  although some of the details are slightly garbled:
 http://www.ibmsystemsmag.com/mainframe/marchapril05/stoprun/10020p1.aspx

re:
http://www.garlic.com/~lynn/2008b.html#63 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#65 How does ATTACH pass address of ECB 
to child?

i.e. the ibm systems mag article talks about my postings and the
archived (URL) references

the paper copy of the mag even included a picture of me sitting at home
at a keyboard.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: How does ATTACH pass address of ECB to child?

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Farley, Peter x23353) writes:
 Granted, but the converse is also true: A posix-style semaphore or
 queuing mechanism is way overkill for the simplest cases, which IMHO
 most non-software-house application (not systems) programmers are most
 likely to need or encounter.

 Typical in my non-software-house application experience is a single
 master task with N children, each child with different responsibilities,
 and each child's only communication need is with the parent, not with
 any sibling.  In this type of application, ECB/POST/WAIT is more than
 adequate and no requests are ever lost using FastPOST or FastWAIT.

 Software houses and systems programmers tend to deal with far more
 complex designs where multiple requestors and/or multiple servers are
 involved.  For those applications a semaphore or queuing mechanisms of
 some sort are certainly the correct solution.  BTDTGTTSTPI.

 IOW, suit the tool to the job you need to accomplish.  ECB/POST/WAIT is
 a perfectly proportioned mechanism for even quite sophisticated
 application programming.  It's only when you must step up to the next
 level of multiple requestor/server designs that ECB/POST/WAIT becomes
 insufficient to your application's needs.

all of the large DBMS vendors have gone to some sort of internal task
management ... and are using compare&swap ... or a similar instruction
from other hardware vendors (that would offer similar semantics,
although the 370 compare&swap is the granddaddy).

one of the problems that rs/6000 and aix ran into in getting all the major
DBMS vendors to port to the RS/6000 and aix ... was that it failed to have
an instruction offering compare&swap atomic semantics ... forcing (by
comparison) significant performance degradation (using kernel calls). It
was possibly initially anticipated that RS/6000 didn't require a
compare&swap instruction because the rios/power chip didn't offer a
multiprocessor option.

however, one of the first AIX enhancements for the major DBMS vendors
was an emulated compare&swap instruction ... which translated into an
SVC call with an extremely short instruction emulation fastpath in the
supervisor call first level interrupt handler ... with immediate return
to application mode.

rios/power was always only a single processor but needed to run disabled
for all interrupts to provide the emulated atomic compare&swap semantics
... and do it with minimal pathlength overhead (thus the special case
emulation and return done totally in the svc interrupt handler).

there is a significant amount of commonality in design across the dbms
industry on how to leverage compare&swap semantics to implement
multithreaded/multitasking operation.

lots of past posts mentioning multiprocessor designs and implementations
and/or the compare&swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

things get more complex going to cluster (loosely-coupled) operation. my
wife had been con'ed into going to POK to be in charge of
loosely-coupled architecture. while there, she came up with peer-coupled
shared data architecture ... which didn't see a lot of takeup until
sysplex (which contributed to her not staying long in the position)
... except for the people doing ims hotstandby ... misc.  past posts
http://www.garlic.com/~lynn/subtopic.html#shareddata

later we did the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

with a distributed lock manager that managed serialization function &
transaction semantics across a clustered, loosely-coupled environment.

we had extended the work in ha/cmp and distributed lock manager for
large-scale scaleup ... mentioned in this old post
http://www.garlic.com/~lynn/95.html#13
and some of the scaleup issues discussed in these old emails
http://www.garlic.com/~lynn/lhwemail.html#medusa

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Wheeler Postings

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Jon Brock) writes:
 Here, for instance:
 http://groups.google.com/group/bit.listserv.ibm-main/msg/7941aee482af5b48?

i.e. also archived here
http://www.garlic.com/~lynn/2006q.html#25 garlic.com

and some related recent references, also about posting
http://www.garlic.com/~lynn/2008b.html#65 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#66 How does ATTACH pass address of ECB 
to child?

now, a post on the subject in the original thread:
http://www.garlic.com/~lynn/2008b.html#63 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB 
to child?

mentions work on original relational/sql implementation, including
technology transfer to endicott for sql/ds:
http://www.garlic.com/~lynn/subtopic.html#systemr

also mentioned was scaleup for our cluster/distributed high availability 
product:
http://www.garlic.com/~lynn/subtopic.html#hacmp

one of the people in this (ha/cmp) meeting, previously mentioned
http://www.garlic.com/~lynn/95.html#13

says that he handled much of the technology transfer from endicott
(sql/ds) back to STL for DB2.

now two of the other people in that same meeting show up a little later
at a small client/server startup responsible for something called the
commerce server. we were called in to consult because they wanted to do
payment transactions on their server ... and also wanted to use a
technology that had been invented called SSL for the implementation:
http://www.garlic.com/~lynn/subnetwork.html#gateway

that work is now frequently referred to as electronic commerce.

40yrs since science center installed cp67 at the univ. the last week of
jan68.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: How does ATTACH pass address of ECB to child?

2008-01-21 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
 I have believed, and other updates to this thread appear to concur,
 that WAIT/POST are older than CS.  At some time, then, WAIT/POST
 code must have used some other locking mechanism.  So, after CS
 first became available there may have been some interval before it
 was reliable to use CS to bypass POST.  When did that interim
 conclude, making it safe to bypass POST in that fashion?  And might
 there be open-source archival MVS (3.8 or earlier) that could
 execute on emulated hardware supporting CS, but for which POST
 requires the actual SVC?

re:
http://www.garlic.com/~lynn/2008b.html#31 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#47 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#48 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#50 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#51 How does ATTACH pass address of ECB 
to child?

wait/post was an svc into the kernel running disabled for interrupts
... and therefore would be serialized from the standpoint of other
events.

the state of the multiprocessing art when charlie was working on
fine-grain multiprocessor locking for cp67 (and invented the compare&swap
instruction) ... was a single supervisor/kernel spin-lock ... i.e. a
single kernel variable on which all first level interrupt handlers would
perform a test&set on initial entry.

if the test&set was successful, execution would continue thru the first
level interrupt handler and into the rest of the kernel. if the test&set
was unsuccessful, the code would continue to branch back to the test&set
instruction until successful. on leaving the kernel (for
application/problem execution), the global kernel spin-lock would be
cleared to zeros (allowing other processors to execute in the kernel).

implicit was that all wait/post operations were correctly serialized
... along with effectively all other kernel functions ... by the
combination of the kernel running disabled for interrupts ... and, on a
real multiprocessor, test&set serialization on the global kernel spin-lock.
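the global-lock protocol above can be sketched with C11's atomic_flag
(the closest modern analogue of 360 test&set); the names enter_kernel
and exit_kernel are illustrative, not from any actual implementation:

```c
#include <stdatomic.h>

/* single global kernel lock in the style of the 360 test&set protocol:
   one lock variable, tested on every entry, zeroed on every exit. */
static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void enter_kernel(void)
{
    /* test_and_set atomically sets the flag and returns its previous
       value: spin (branch back) until the test&set succeeds */
    while (atomic_flag_test_and_set(&kernel_lock))
        ;
}

void exit_kernel(void)
{
    /* clear the global lock to zero, letting another processor
       proceed into the kernel */
    atomic_flag_clear(&kernel_lock);
}
```

note the limitation the text describes: the flag carries only one bit of
information (locked/notlocked), so every contended entry busy-waits on
the same single lock.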

as noted, with other organizations believing that test&set was adequate
for all multiprocessor functions ... the challenge given the science
center
http://www.garlic.com/~lynn/subtopic.html#545tech

was to come up with compare&swap uses that weren't multiprocessor
specific. the result was the stuff that appears in the appendix
of the principles of operation ... previous reference
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320

where it was possible to implement correct, concurrent shared operations
w/o requiring the overhead of kernel/supervisor calls (which achieved
correctly serialized operation, in part by running disabled for
asynchronous interrupts).

other past posts mentioning various multiprocessor and/or compare&swap
instruction topics
http://www.garlic.com/~lynn/subtopic.html#smp

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: How does ATTACH pass address of ECB to child?

2008-01-21 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (McKown, John) writes:
 Atomic is used quite a bit in computer science. Like the original
 Greek, it means indivisible. That is, when an atomic operation occurs,
 any other process will either see the data in the original form, or in
 the updated form. But it will never seen an intermediate form. It is
 especially used in relational databases. This is usually done so that,
 for instance, if you transfer money from checking to savings, you never
 see the amount in both places. You either see the amount in checking or
 in savings. This despite the fact that a lot of stuff is going on to do
 the move.

 compare and swap simply means that. It compares field1 against field2.
 If field1 equals field2, then the contents of field3 is placed in
 field2. What is atomic is that there is a lock that occurs so that
 at the instant before field1 is compared against field2, no other
 process is allowed to modify or even look at field2 until after the
 instruction finishes. This ensures that once the operation starts that
 no other process can make any updates or decisions based on the contents
 of field2.

there are ACID properties with regard to transactions ... frequently
things like financial.

A .. atomicity
C .. consistency
I .. isolation
D .. durability

a wiki ref
http://en.wikipedia.org/wiki/ACID

a simpler example is two concurrent transactions, one a debit and one a
credit ... attempting to update the *same* value.

say you have $5000 in a savings account. there is a transaction that
fetches the current value ... in order to subtract a $200 ATM debit.  At
the same time there is another transaction that is attempting to perform
(add) an EFT $1000 deposit.

W/o the transaction serialization semantics, the EFT $1000 deposit
concurrently fetches the current ($5000) value, adds $1000 and starts a
store operation of the $6000 value ... concurrently while the ATM debit
is performing the store of the $4800 value.

correct transaction atomic serialization should result in the correct $5800
when everything is done. w/o transaction atomic serialization, the $6000
store might be done followed by the $4800 store ... or the $4800
store might be done followed by the $6000 store (resulting in either
$4800 or $6000, neither correct)

for the compare&swap scenario, the $4800 store only completes if the
value being replaced is $5000 ... otherwise things start all over
... similarly, the $6000 store only completes if the value being
replaced is $5000 ... otherwise things start all over again.
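the debit/credit scenario reads naturally as a compare&swap retry loop;
a minimal sketch in C11 atomics (apply_txn is a made-up name for
illustration, not an actual transaction-system interface):

```c
#include <stdatomic.h>

/* optimistic update: the store completes only if the balance still
   holds the value this transaction originally fetched; otherwise the
   fetch and the arithmetic are redone. */
long apply_txn(_Atomic long *balance, long delta)
{
    long seen = atomic_load(balance);
    /* on failure, seen is refreshed with the value some other
       transaction stored, and seen + delta is recomputed */
    while (!atomic_compare_exchange_weak(balance, &seen, seen + delta))
        ;
    return seen + delta;
}
```

with the numbers above: whichever of the $200 debit and $1000 deposit
loses the race simply refetches and redoes its arithmetic, so the final
balance is $5800 either way.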

in transaction systems, this is sometimes referred to as optimistic
... as opposed to purely serialized locking systems ... which only allow
one single transaction at a time to fetch a value ... and no other
transaction is allowed to fetch a value (or otherwise proceed) until the
active transaction has completed ... this is somewhat analogous to the
kernel spin-lock mentioned in previous description:
http://www.garlic.com/~lynn/2008b.html#58 How does ATTACH pass address of ECB 
to child?

the transaction properties for consistently updating two different
values ... is somewhat more complicated ... than a simple compare&swap.
Doing a transfer from one account to another requires that two values
(not just one) be correctly updated as a single, synchronized, serialized
operation (w/o allowing other simultaneous transactions to corrupt the
values).

in a few, carefully controlled situations it can be done by
compare-double & swap. a savings account value and the debit account
value are stored in contiguous storage locations. For a transfer
operation, both values are fetched, and the appropriate changes (a
subtraction from one and an addition to the other) are made to both
values.  then compare-double and swap is executed ... only succeeding
with the replace of both values IFF (if & only if) neither value has
changed since the original fetch operation. If either or both values
have changed since the initial fetch (because other operations are
concurrently attempting to change the same fields) ... then the store
operation fails ... and the process starts all over again.

the other alternative ... for concurrent updates of multiple different
values is to resort to something like the spin-lock scenario ... only
allowing one process to perform operations at a time.  As
pointed out ... to avoid getting into trouble ... the active running
transaction can't be interrupted while it is performing the locked
operation.

compare&swap semantics tends to provide a much higher level of concurrent
operations with much lower overhead for providing correctly serialized
operation.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Re: How does ATTACH pass address of ECB to child?

2008-01-20 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
 This is the Principles of Operation; it is not the Assembler Services
 manual.  Note carefully the provided that clause.  It's important.
 Is there any explicit guarantee in the Assembler Services manual that
 the supervisor WAIT and POST routines use COMPARE AND SWAP to
 manipulate [ECBs], and that the ECBs are updated only after all
 collateral operations are completed?  If not, the programmer who
 engages in such shenanigans is using unsupported interfaces.

re:
http://www.garlic.com/~lynn/2008b.html#31 How does ATTACH pass address of ECP 
to child?

you have to remember that this was being written over 35yrs ago
... after the favorite son operating system people in pok claimed that
the compare&swap instruction wasn't necessary (for multiprocessor
operation); that test&set (carried forward from 360 multiprocessor) was
more than satisfactory.
http://www.garlic.com/~lynn/subtopic.html#smp

the challenge then to the science center 
http://www.garlic.com/~lynn/subtopic.html#545tech

... in order to get compareswap justification for inclusion in 370
architecture ... was to come up with other uses ... that weren't
multiprocessing specific. the result was some number of examples of its
use in multiprogramming/multithreaded operation (that wasn't necessarily
multiprocessing).

in any case, since then there have been several generations, allowing
compareswap instruction use to widely permeate through the
infrastructure.



Re: How does ATTACH pass address of ECB to child?

2008-01-20 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (David Logan) writes:
 Test and set works just fine too. They are both atomic.

re:
http://www.garlic.com/~lynn/2008b.html#31 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#47 How does ATTACH pass address of ECB 
to child?

the issue with atomic testset was that it was a purely binary value
... locked or not locked. charlie was working on fine-grain cp67
multiprocessing locking at the science center (virtual machine operating
system running on 360/67)
http://www.garlic.com/~lynn/subtopic.html#545tech

when he invented compareswap (as previously noted, CAS was chosen
because it is charlie's initials).
http://www.garlic.com/~lynn/subtopic.html#smp

compareswap ... allowed for a full-word value ... as opposed to a
simple set/notset (or locked/notlocked) bit ... and compare-double-swap
allowed for atomic update of a double-word value.

in the multithreaded/multiprogramming environment ... compareswap allowed
for atomic updating of a wide variety of values.

the testset scenario allowed for locking a section of code,
non-atomic update of value within the locked code section ... and then
unlocking the code section. independent executing paths would arrive at
the locked code section and spin until the other executable path had
released the lock. the problem in the multiprogramming/multithreaded
sequence ... is that the executable thread could be interrupted while
doing the non-atomic update (but still holding the lock) ... with
execution then passing to another thread which went into an unending
spin-loop (waiting for the suspended thread to release the lock).

as noted, the examples added to the principles of operation included
atomic updates of more complex values (than simple
set/not-set) ... which could be used even in interruptible code.  prior
to compareswap ... such multiprogramming/multithreaded operation always
required the overhead of a supervisor call (for performing non-atomic
updates in non-interruptible code ... and avoiding possible application
spin loops).
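an example of the kind of lock-free update those POP examples describe ... safe even in interruptible application code, since no lock is ever held ... might look like this (a sketch; the running-maximum use case is hypothetical):

```c
#include <stdatomic.h>

static _Atomic int high_water;   /* shared running maximum */

void note_value(int v)
{
    int old = atomic_load(&high_water);
    /* being interrupted between the load and the compare-exchange just
       causes a retry -- no other thread is ever left spinning on a lock
       held by a suspended thread */
    while (v > old &&
           !atomic_compare_exchange_weak(&high_water, &old, v))
        ;   /* 'old' now holds the current value; condition re-tested */
}
```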

from long ago and far away

A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

for other topic drift ... it has now been 40yrs since three people from
the science center came out and installed cp67 at the university.



Re: How does ATTACH pass address of ECB to child?

2008-01-20 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (David Logan) writes:
 While all of that is interesting, and useful in its own right, none of that
 matters when you want to perform a simple atomic test and set of the
 posted bit in an ECB. Both instructions are atomic for this purpose.

 You're not going to have any threading problems when using either
 instruction to check/set the bit in an ECB. The only reason to stay away
 from TS when posting is if you also want to update the completion code.

 And the atomic nature only matters when setting the posted bit. For the task
 preparing to wait, since it's only a check of the posted bit, you perform a
 TM and then call WAIT if the posted bit hasn't been set by the time of the
 TM.

the original post
http://www.garlic.com/~lynn/2008b.html#31 How does ATTACH pass address of ECB 
to child

was about quote from the principles of operation with regard to the
example of post routine bypass:

A.6.3.1 Bypass Post Routine
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.3.1?SHELF=DZ9ZBK03&DT=20040504121320&CASE=

from above:

The following routine allows the SVC POST as used in MVS/ESA to be
bypassed whenever the corresponding WAIT has not yet been executed,
provided that the supervisor WAIT and POST routines use COMPARE AND SWAP
to manipulate event control blocks (ECBs).

... snip ... 

somebody then noted the caveat about supervisor WAIT & POST routines
using COMPARE AND SWAP ... and I replied that the original writeup was
over 35 yrs ago ... back to when the pok favorite son operating system
assumed that testset was more than adequate for all purposes.
http://www.garlic.com/~lynn/2008b.html#47 How does ATTACH pass address of ECB 
to child

(atomic) testset
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.139?SHELF=DZ9ZBK03&DT=20040504121320

from above:

The leftmost bit (bit position 0) of the byte located at the
second-operand address is used to set the condition code, and then the
byte is set to all ones.

... snip ... 

it has always been defined to test a single bit (setting the
corresponding condition code) and then set the whole byte to ones (xff)
... regardless of the bit's original value.

so the post routine bypass needs to do an atomic replace of the whole word
(or simulate an atomic replace by executing disabled in the kernel
with code-section locking).

for topic drift, quick search engine turns up this reference to ECB
field, wait, post (and other os simulation services):
http://www.agorics.com/Library/KeyKos/Gnosis/190.html

from above:

SVC 2 - POST

POST is fully supported. Bit 0 of the ECB is set to 0, bit 1 is set
to 1, and bits 8-31 are set to the specified completion code.

... snip ...

recent post referencing gnosis and keykos
http://www.garlic.com/~lynn/2008b.html#24 folklore indeed

but then a little more searching turns up this reference

Synchronizing Tasks (WAIT, POST, and EVENTS Macros)
http://publib.boulder.ibm.com/infocenter/zos/v1r9/topic/com.ibm.zos.r9.ieaa600/tasks.htm

from above:

Figure 42. Event Control Block (ECB)

0   1   2
+---+---+---+
| W | P |   completion code |
+---+---+---+

When an ECB is originally created, bits 0 (wait bit) and 1 (post bit)
must be set to zero. If an ECB is reused, bits 0 and 1 must be set to
zero before a WAIT, EVENTS ECB= or POST macro can be specified. If,
however, the bits are set to zero before the ECB has been posted, any
task waiting for that ECB to be posted will remain in the wait
state. When a WAIT macro is issued, bit 0 of the associated ECB is set
to 1. When a POST macro is issued, bit 1 of the associated ECB is set to
1 and bit 0 is set to 0. For an EVENTS type ECB, POST also puts the
completed ECB address in the EVENTS table.

... snip ... 

i.e. 

initially both bits zero and one are zero. 

wait specifies that bit zero is set to one. 

post specifies that bit zero is set to zero, bit one is set to one, and the
rest of the word is filled in with the completion code.

testset checks whether bit zero is zero and then sets the whole
byte to ones (regardless).

... so discussing the post routine bypass scenario (from the original
post) ... of a multithreaded operation performing a post operation
... when the WAIT hasn't yet been executed (i.e. both bits zero and
one are still zero), an atomic replace occurs setting bit 0 to zero, bit
1 to one and the rest of the word to the completion code (as long as both
bits zero and one are still zero).
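the bypass-post logic can be sketched as a compare-and-swap over the full ECB word (an illustrative sketch, not the actual MVS code; bit 0 is the wait bit, bit 1 the post bit, and the remainder the completion code, per the ECB layout above; the sketch assumes a freshly initialized, all-zero ECB for the bypass case):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

#define ECB_WAIT 0x80000000u   /* bit 0: wait bit */
#define ECB_POST 0x40000000u   /* bit 1: post bit */

/* attempt the post bypass: succeeds only when neither the wait bit nor
   the post bit is set, atomically storing the post bit plus completion
   code.  returns false if a WAIT is pending (or it's already posted),
   in which case the caller must fall back to the SVC POST path. */
bool bypass_post(_Atomic uint32_t *ecb, uint32_t completion)
{
    uint32_t expected = 0;   /* both bits zero: no WAIT issued, not posted */
    return atomic_compare_exchange_strong(
        ecb, &expected, ECB_POST | (completion & ~(ECB_WAIT | ECB_POST)));
}
```

testset couldn't express this check: it examines only bit 0 and then overwrites the whole byte with ones.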

this is further dependent on the wait routine only doing an atomic
setting of bit zero to one ... as long as both bits zero and one are
still zero. unfortunately, testset only tests bit zero for zero
... before setting the whole byte to ones. that means that atomic
testset doesn't correctly perform the wait operation if the ECB has
already been posted.

Re: How does ATTACH pass address of ECB to child?

2008-01-20 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (David Logan) writes:
 While all of that is interesting, and useful in its own right, none of that
 matters when you want to perform a simple atomic test and set of the
 posted bit in an ECB. Both instructions are atomic for this purpose.

 You're not going to have any threading problems when using either
 instruction to check/set the bit in an ECB. The only reason to stay away
 from TS when posting is if you also want to update the completion code.

 And the atomic nature only matters when setting the posted bit. For the task
 preparing to wait, since it's only a check of the posted bit, you perform a
 TM and then call WAIT if the posted bit hasn't been set by the time of the
 TM.

re:
http://www.garlic.com/~lynn/2008b.html#50 How does ATTACH pass address of ECB 
to child?
http://www.garlic.com/~lynn/2008b.html#50 How does ATTACH pass address of ECB 
to child?

aka ... the ECB semantics are defined in terms of both bits zero and one
... where the atomic testset instruction only tests bit zero.

wait semantics defines that it tests that ECB isn't already being waited
on (bit zero set to zero) and also tests that the ECB isn't already
posted (bit one set to zero).

using testset instruction will correctly handle an ECB that isn't
already being waited on ... but will incorrectly handle an ECB that has
already been posted (since testset instruction *ONLY* tests bit zero
before replacing the whole byte with ones).

the original operating system convention would always call the
supervisor ... so it was non-interruptable when the ECB field was being
updated ... and used testset in multiprocessing configuration to
serialize execution on the different processors.

moving to multiprogramming/multithreaded code w/o code serialization
locks and code enabled for interrupts ... required atomic updates of the
field ... based on update semantics following the rules for both bits 0
(wait) and one (post) ... not just a single bit ... aka wait processing
honors an ECB that is already being waited on ... bit 0, and/or has already
been posted ... bit 1. testset can only recognize an ECB that is
already being waited on ... but won't recognize an ECB that has already
been posted ... before it obliterates all the bits in the byte, setting
them all to ones.



Re: How does ATTACH pass address of ECB to child?

2008-01-18 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Lindy Mayfield) writes:
 What doesn't is that Cannatello's book has a page and a half on doing
 POST, with one example of how to change the ECB without using the POST
 macro.

 He even has the child checking the ECB to see if a WAIT had been issued.

A.6.3.1 Bypass Post Routine
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.3.1?SHELF=DZ9ZBK03&DT=20040504121320&CASE=

from above:

The following routine allows the SVC POST as used in MVS/ESA to be
bypassed whenever the corresponding WAIT has not yet been executed,
provided that the supervisor WAIT and POST routines use COMPARE AND SWAP
to manipulate event control blocks (ECBs).

... snip ...

i.e. charlie had been working on fine grain multiprocessor locking for
cp67 at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

when he invented the compareswap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

note compareswap was chosen because CAS is charlie's initials.

trying to get the instruction into the 370 architecture was initially
rebuffed since the pok favorite son operating system people claimed that
testset ... carried forward from 360 multiprocessor days ... was all that
was necessary. the statement was made that getting compareswap into the
370 architecture required coming up with uses that weren't
multiprocessor specific. the science center came up with the
multitasking/multithreaded examples ... which were included in the
compareswap programming examples.

A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

more recently the perform locked operation instruction was defined
... and added to the above description.



Re: Flash memory arrays

2008-01-18 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (, IBM Mainframe Discussion List) writes:
 Around mid-1999 EMC signed a multi-$billion contract to buy a HUGE number  of 
 little disks from IBM over a period of several years.  So IBM was making  
 disks then.  And I think that contract expired about half a decade  ago.

san jose plant site (disk unit) now belongs to hitachi. most recently,
hitachi has been talking about selling off the unit.
http://www.garlic.com/~lynn/2007v.html#33 Hitachi, Silver Lake in talks about 
hard drives, sources say

a few other references
http://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- 
Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003i.html#25 TGV in the USA?
http://www.garlic.com/~lynn/2003n.html#39 DASD history
http://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
http://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
http://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk 
drives
http://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk 
drives
http://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk 
drives



Re: Re-hosting IMB-MAIN

2008-01-18 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Thomas Kern) writes:
 You could save some money by running SLES10  and the linux version of
 LSoft's LISTSERV product. 

 No z/OS or z/VM necessary.

recent posts mentioning listserv history 
http://www.garlic.com/~lynn/2008.html#75 Rotary phones
http://www.garlic.com/~lynn/2008.html#76 Rotary phones

on (vm-based) bitnet/earn:
http://www.garlic.com/~lynn/subnetwork.html#bitnet

including the following reference:

1991

The international BITNET network reached its peak, connecting some 1,400
organizations in 49 countries for the electronic, non-commercial
exchange of information in support of research and education. Thanks
largely to the volunteer efforts of Eric Thomas, BITNET provided
thousands of electronic mailing lists based on LISTSERV.

Eric Thomas did not want his software to disappear with the
mainframes. Therefore, he started looking for ways to port LISTSERV to
other environments, such as VMS and Unix.

... snip ... 

from this site:
http://www.lsoft.com/corporate/history_listserv.asp

predating listserv on bitnet was the internal corporate vm-based online
conferencing facility that had options that would run in LISTSERV-like
mode as well as a USENET-like mode.

and predating all of them was the online computer conference that
Tymshare provided to share starting aug76 ... archives:
http://vm.marist.edu/~vmshare/

on their vm-based commercial timesharing platform
http://www.garlic.com/~lynn/subtopic.html#timeshare

some old email with vmshare references
http://www.garlic.com/~lynn/lhwemail.html#vmshare

one of my hobbies was providing custom, highly modified vm systems to
internal locations ... including the HONE infrastructure
http://www.garlic.com/~lynn/subtopic.html#hone

which had sort of started out after the 23jun69 unbundling announcement
to provide operating system hands-on experience for people in branch
offices (running in virtual machines ... starting out with number of
deployed cp67 systems). HONE also evolved some number of cms/apl-based
sales & marketing applications which came to dominate all HONE activity.
Eventually HONE clones were deployed all over the world ... and it
wasn't even possible to submit a customer order that hadn't first been
processed by a HONE application.

part of what i was doing with vmshare ... was setting up a process where
I replicated all the vmshare files (from tymshare) on the various HONE
systems.




Re: Flash memory arrays

2008-01-17 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Clark Morris) writes:
 Au contraire.  My USB key is FBA formatted in 512 byte sectors (I
 think it is one of the FAT formats available to Win 98 or earlier).
 FBA is oriented to both disk and even more so, solid state.  There are
 a number of limitations in CKD that will be painful to eliminate and
 even if they are we are still left with a KLUDGE for which the phase
 out should have started 25 years ago.

re:
http://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays

i offered fba support (i.e. 3370 fba) well over 25 yrs ago. The response
i got back from the data management group was that (at the time) it would
still cost $26m for training, education, documentation, etc ... even if i
provided a fully integrated and tested implementation. the claim was that
i wouldn't be able to show the necessary ROI for the $26m since
customers would just buy the equivalent in fba that they would have
spent on ckd (no incremental revenue ... and therefore no ROI). the
issues about life-cycle costs of maintaining ckd (and
life-cycle savings converting to fba) were discounted.

other posts mentioning ckd, fba, multi-track search, etc.
http://www.garlic.com/~lynn/subtopic.html#dasd

one of the first such costs was trying to get some of the eckd kludge to
work for various things ... like the speed-matching buffer (aka the 3880
supporting attachment of 3mbyte data-streaming 3380s to 168/3033 1.5mbyte
channels). a couple recent posts on the subject:
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC 
chip more expensive?

including this old email reference mentioning problems getting eckd for
speed-matching buffer working
http://www.garlic.com/~lynn/2007e.html#email820907b



Re: Flash memory arrays

2008-01-17 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Clark Morris) writes:
 There obviously would have to be a co-existence period where both
 architectures are supported.  VSAM is already FBA as are all of the
 newer data architectures.  The challenges will be spool, providing GDG
 like capability to the VSAM ESDS, moving PDSE read access to the
 Nucleus and deciding how to provide the current SYS1.NUCLEUS
 capability.  Maybe the MVS people should humble themselves and talk to
 the VM and VSE people to find out how they solved the problem.

re:
http://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
http://www.garlic.com/~lynn/2008b.html#16 Flash memory arrays
http://www.garlic.com/~lynn/2008b.html#17 Flash memory arrays

next week is 40yrs since i started on virtual machines ... i.e. three
people had come out from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

to install (virtual machine) cp67 at the university (i had already been
doing optimization work on os/360 for a couple yrs). 

From the original implementation in the mid-60s, both cp67 and cms had
been logical fba ... even when using ckd dasd (which effectively hasn't
changed) ... which subsequently made it trivial to support real fba
(3310 & 3370) devices.



Re: Flash memory arrays

2008-01-17 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (McKown, John) writes:
 Only z/OS is stuck on ECKD formatted DASD. z/VM and z/VSE can both run
 on FBA. z/LINUX can run on FBA and/or on SAN/SCSI DASD. I think that the
 latest z/VM can also run on SAN/SCSI connected DASD as well. z/OS
 remains the hold out. For whatever reason that may be. 

there have been enormous problems with ckd (and/or with trying to get the
eckd kludge to compensate for the problems).

part of it is configuration support ... i.e. device geometry
configuration issues are essentially non-existent on platforms
supporting fba ... especially vis-a-vis all the stuff that is
periodically seen here just on various 3390 model & associated geometry
problems.

another has been speed-matching ... mentioned in a previous post
http://www.garlic.com/~lynn/2008b.html#16 Flash memory arrays

and/or latency issues.

in the same time-frame i originally offered FBA support ... i had also
done a channel-extender project for the IMS group in STL. STL was
bursting at the seams ... and they needed to move 300 people from the IMS
group to a remote off-site location. The problem was that they deemed that the
remote 3270 CMS interactive response was unacceptable compared to what
they were getting from local 3270 CMS within the STL bldg. The solution
was to get CMS local 3270 terminals (for the IMS group) at the remote
site (with local CMS 3270 response) back to the vm370 machines in the
stl bldg. This was accomplished with channel extender (from network
systems corporation) running over T1 (1.5mbit) link.

An unexpected side-effect of this effort ... was not only did the IMS
group continue to get local 3270 CMS interactive response ... but the
channel extender actually improved overall system thruput and
performance. The issue was that these were 168/3033 16 channel systems
... where the 3270 control units and disk controllers were spread out
over common channel pool. The problem was that the 3270 control units
had extremely high channel busy time for the operations they were
performing ... which was interfering with disk thruput. The
local 3270 control units were moved to the remote site and replaced on the
local channel interface with the channel-extender boxes ... which had
significantly lower channel busy overhead. The resulting reduction in
channel busy time (from getting 3270 control units off the local channels)
improved overall system performance by 10-15 percent.

Basically, I could pretty trivially support almost any kind of direct
channel controller at the remote site ... except for count-key-dasd ...
even tho the associated speed-matching mismatch for the channel
extender was much larger than the factor-of-two mismatch later
dealt with in attaching 3mbyte 3380s to 370 1.5mbyte channels
... aka channel-extender local 1mbyte devices running over a 1.5mbit T1
connection ... nearly a factor-of-ten speed mismatch, compared to
the factor-of-two mismatch handled by the 3880 speed-matching
implementation. ... again, past posts mentioning fba, ckd, etc
http://www.garlic.com/~lynn/subtopic.html#dasd

getting local 3270 cms terminal thruput for the IMS group was one
of the early efforts in the hsdt effort
http://www.garlic.com/~lynn/subnetwork.html#hsdt



Re: Flash memory arrays

2008-01-16 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Ed Gould) writes:
 You have hit it the head. We had a unit that emulated a 2305 (don't
 remember which model) it worked well . except when we had a power
 failure. Then at power up *EVERYTHING* was gone. We had to analyze it
 and put a vtoc back and then redefine PLPA and then IPL again to get
 the system to use it. I think my hair started to go gray because of
 the blasted machine. It was gone within a month.

 Oh, yes we had an application that *NEEDED* (well at least they
 thought they did) the 2305. In the month that it was going out, we
 got the applications people to use  VIO. They were extremely happy
 and that was the end of the beast. If the machine survives through a
 power blink then I would reconsider it but it would take a lot to do
 so.

there were STC solid-state disks ... and for internal datacenters there was
something referred to as a 1655 (several hundred of them) from a vendor
that was using memory chips that had failed normal acceptance tests ... but
could still be used in this manner. they were most commonly used as
paging devices and so not surviving power failure wasn't an issue.

they were supplanted by 3090 expanded storage (and later really large
real memory) and disk controller electronic caches (initially 3880-11
and 3880-13).

old email discussing 2305, 1655, and stc electronic disk comparison:
http://www.garlic.com/~lynn/2007e.html#email820805

old posts mentioning 1655, 3880-11, and/or 3880-13:
http://www.garlic.com/~lynn/2000d.html#13 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems.  Disk 
history...people forget
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001d.html#68 I/O contention
http://www.garlic.com/~lynn/2001l.html#53 mainframe question
http://www.garlic.com/~lynn/2001l.html#54 mainframe question
http://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
http://www.garlic.com/~lynn/2002.html#31 index searching
http://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
http://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction 
count instead of timer
http://www.garlic.com/~lynn/2002o.html#3 PLX
http://www.garlic.com/~lynn/2002o.html#52 ``Detrimental'' Disk Allocation
http://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: 
Yamhill
http://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: 
Yamhill
http://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: 
Yamhill
http://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an 
MVT ABEND 422?
http://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
http://www.garlic.com/~lynn/2003m.html#39 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
http://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
http://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About 
DASD
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#30 Massive i/o
http://www.garlic.com/~lynn/2005r.html#51 winscape?
http://www.garlic.com/~lynn/2005t.html#50 non ECC
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#14 virtual memory
http://www.garlic.com/~lynn/2006k.html#57 virtual memory
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than 
disks ?
http://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than 
disks ?
http://www.garlic.com/~lynn/2006v.html#31 MB to Cyl Conversion
http://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After 
Multi-Core?
http://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller 
cache

Re: Computer Science Education: Where Are the Software Engineers of Tomorrow?

2008-01-15 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Michael Stack) writes:

 This appeared yesterday:
 http://www.informationweek.com/news/showArticle.jhtml?articleID=205601557

 Nearly 70% of middle school teachers lack education and certification
 in mathematics, let alone computer and business skills, the National
 Center for Education finds.

this and other aspects/posts in similar thread in a.f.c
http://www.garlic.com/~lynn/2008.html#44 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#46 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#56 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?
http://www.garlic.com/~lynn/2008.html#68 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#73 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?
http://www.garlic.com/~lynn/2008.html#87 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?
http://www.garlic.com/~lynn/2008.html#90 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?
http://www.garlic.com/~lynn/2008b.html#1 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Computer Science Education: Where Are the Software Engineers of Tomorrow?

2008-01-15 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


re:
http://www.garlic.com/~lynn/2008b.html#2 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?

and a related new post ... also in a.f.c ... about a newly published
report announced by nsf today
http://www.garlic.com/~lynn/2008b.html#6 Science and Engineering Indicators 2008

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: File Transfer conundrum

2008-01-11 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Gary Green) writes:
 I lost track of who posted the original inquiry, so take this for what it's
 worth.

 If the requirement is in the financial industry, could the communications
 between the two/various systems use S.W.I.F.T. (Society for Worldwide
 Interbank Financial Telecommunications)?  It's been some time since I wrote
 anything for SWIFT, but it was extremely secure and most financial
 institutions should be linked in.

 When I did some work, it was used in the securities market, primarily for
 payments, foreign exchange, securities, etc...  However, there were
 rumblings that the SWIFT organization was thinking about opening up the
 network for other financial transactions; which I took to mean data
 exchange...

home page
http://www.swift.com/

swift-2 provided internet capability and opened up for b-to-b.

we had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server; they also had
this technology they called SSL they wanted to use ... and the result
is now sometimes called e-commerce. part of the effort was something
called the payment gateway (transition between the internet and the
acquiring networks)
http://www.garlic.com/~lynn/subnetwork.html#gateway

we were then involved in the x9a10 financial standard working group
(which in the mid-90s had been given the requirement to preserve the
integrity of the financial infrastructure for all retail payments) that
resulted in the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

some years ago we were also asked to provide some input to the swift-2
(what it was called at the time) specification.

Connecting to the secure IP network (SIPN)
http://www.swift.com/index.cfm?item_id=2304

from above:

SWIFTNet messaging services are provided via SWIFT's secure IP network
(SIPN), a highly secure and reliable network. Full redundancy, advanced
recovery mechanisms and first class operations and customer support
services ensure continuous network availability for SWIFTNet services.

... snip ...

SWIFTNet Interfaces Qualification
http://www.swift.com/index.cfm?item_id=2451

other refs:

Securities Markets Infrastructures
http://www.swift.com/index.cfm?item_id=2437

Banking Markets infrastructures
http://www.swift.com/index.cfm?item_id=57981

from above:

Additionally, SWIFT is now complementing its position in the wholesale,
high value clearing market by extending its portfolio of SWIFTNet
messaging solutions to the low-value payments and ACH market.

... snip ...

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Computer Science Education: Where Are the Software Engineers of Tomorrow?

2008-01-11 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Patrick O'Keefe) writes:
 Since the other thread on this topic went off in a seriously OT 
 direction I'll comment on this thread.

 That original article seemed to imply that the problem was language-
 based.  I've been out of touch with the educational system(s) far
 too long to really know what is currently taught and how it
 is taught.   I have trouble believing that switching from Java to 
 C, C++, LISP, and Ada is going to fix the problem.  (Is Ada common
 in CS curriculum?  I notice the authors work at an Ada development
 shop.  They may be a bit biased.)

 I think their comment that Java encourages a "pick a tool that works"
 mentality may be right on, though.  

 Rick Fochtman's question about Radix Partition Trees would make a 
 good test for CS students.  I picture a blank stare on the students'
 faces, but I would love to be shown wrong.

re:
http://www.garlic.com/~lynn/2008.html#64 Radix Partition Trees

there is independent thread in a.f.c regarding the same article, some of
the posts:
http://www.garlic.com/~lynn/2008.html#44 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#46 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#56 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?
http://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are 
the Software Engineers of  Tomorrow?
http://www.garlic.com/~lynn/2008.html#62 competitiveness

part of it related to reduced math requirements and part of it related
to java (including some mention of java early days).

as noted in the radix partition trees thread ... some of luther's work
showed up in mainframe instructions.

i had been involved in the original relational/sql implementation,
system/r
http://www.garlic.com/~lynn/subtopic.html#systemr

and technology transfer to endicott for sql/ds. for other topic drift
... one of the people in the meeting referenced here ... had mentioned
that they had done much of the work for the technology transfer back
from endicott to stl for db2
http://www.garlic.com/~lynn/95.html#13

about the same time as the original relational/sql work, i was also
involved in some work on a similar, but different, kind of dbms
implementation (a joint project between some people in stl and the los
gatos vlsi group). this had some of the same objectives as the
relational/sql activity ... but significantly relaxed the requirements
for structured data definition ... and used radix partition trees for
its indexing structure (and the person involved in the two mainframe
instructions was brought in to consult on some of the work).

there were some differences between the old-style '60s DBMS contingent
in STL and the relational/sql contingent ... with the '60s DBMS
contingent pointing out that relational/sql typically doubled the
physical disk requirements (for the table indexes) and also greatly
increased the physical disk i/os (for processing the indexes). the
relational/sql contingent countered that the use of indexes was part of
eliminating the direct record pointer paradigm (characteristic of the
'60s DBMS) as well as all the associated administrative overhead.

during the 80s, things started to tip towards relational/sql ... with
disk cost/byte significantly reduced and significant increases in
system real storage (allowing index caching, eliminating many of the
additional index disk physical i/os) ... aka a change in the hardware
cost tradeoff versus administrative/skill overhead.
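as a back-of-envelope sketch of that tipping point (the page size,
fan-out, and row count here are my own assumed numbers, not from the
thread), the i/o arithmetic looks roughly like:

```python
import math

# hypothetical numbers -- not from the post -- to illustrate the tradeoff
rows = 1_000_000      # table rows being indexed
fanout = 200          # index entries per 4KB index page (assumed)

# index height: how many levels it takes to narrow 1M rows down to one
height = math.ceil(math.log(rows, fanout))   # 3 levels for these numbers

ios_uncached = height + 1   # read each index level from disk, then the data page
ios_cached = 1              # index levels resident in real storage; only data page i/o

print(f"index height: {height} levels")
print(f"disk i/os per lookup, index on disk:  {ios_uncached}")
print(f"disk i/os per lookup, index cached:   {ios_cached}")
```

with the index resident in real storage only the data-page i/o remains,
which is the hardware-cost side of the tradeoff tipping during the 80s.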

for other drift, a totally independent implementation i use for
maintaining the rfc index information
http://www.garlic.com/~lynn/rfcietff.htm

as well as the merged glossary/taxonomy information
http://www.garlic.com/~lynn/index.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Radix Partition Trees

2008-01-11 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (W. Kevin Kelley) writes:
 I see that I screwed up and I owe Luther an apology. It should 
 read ...tightest assembly language programs.. There were few problems with 
 Luther's program (other than figuring out how they worked!).

re:
http://www.garlic.com/~lynn/2008.html#65 Radix Partition Trees


a few old posts mentioning luther:
http://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
http://www.garlic.com/~lynn/98.html#20 Reviving the OS/360 thread (Questions 
about OS/360)
http://www.garlic.com/~lynn/2001.html#2 A new Remember when? period happening 
right now
http://www.garlic.com/~lynn/2001d.html#28 Very CISC Instuctions (Was: why the 
machine word size ...)
http://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2002d.html#18 Mainframers: Take back the light 
(spotlight, that is)
http://www.garlic.com/~lynn/2002q.html#10 radix sort
http://www.garlic.com/~lynn/2003e.html#80 Super-Cheap Supercomputing
http://www.garlic.com/~lynn/2003i.html#58 assembler performance superiority: a 
given
http://www.garlic.com/~lynn/2003i.html#83 A Dark Day
http://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
http://www.garlic.com/~lynn/2005c.html#35 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005c.html#38 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005e.html#37 Where should the type information be?
http://www.garlic.com/~lynn/2007l.html#57 How would a relational operating 
system look like?
http://www.garlic.com/~lynn/2007o.html#55 mainframe performance, was Is a RISC 
chip more expensive?
http://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at 
Lincoln Labs
http://www.garlic.com/~lynn/2008.html#68 Computer Science Education: Where Are 
the Software Engineers of Tomorrow?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Radix Partition Trees

2008-01-10 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Rick Fochtman) writes:

 Has anyone ever seen any doc on using radix partition trees? I'm
 thinking it may have been one of the rainbow books.

 I vaguely remember data tree structures and I've got a table search
 problem that might be the perfect application for a tree-structured
 data repository. The table might have up to 1,000,000 entries, all in
 storage, and a balanced n-ary tree has GOT to be faster than using a
 binary search. The nature of the data is such that a plain
 old-fashioned list, in sorted order, isn't real amenable to a binary
 search, either.

for the fun of it look at:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.7?SHELF=DZ9ZBK03DT=20040504121320
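the referenced appendix describes the hardware tree-format tables; as a
rough software analogue of the radix-partitioning idea (my own sketch,
not the hardware format), each level of the tree partitions on the next
byte of the key, so lookup cost tracks key length rather than entry
count:

```python
# minimal byte-trie (radix partition) sketch -- illustrative only, not
# the tree format the Principles of Operation appendix describes.
# each level partitions on the next byte of the key.

def trie_insert(root, key, value):
    node = root
    for b in key:                       # descend one level per key byte
        node = node.setdefault(b, {})
    node["_value"] = value              # terminal slot for this exact key

def trie_lookup(root, key):
    node = root
    for b in key:
        node = node.get(b)
        if node is None:
            return None                 # no entry with this prefix
    return node.get("_value")

root = {}
trie_insert(root, b"ALPHA", 1)
trie_insert(root, b"ALPS", 2)
print(trie_lookup(root, b"ALPS"))       # -> 2
print(trie_lookup(root, b"ALPHA"))      # -> 1
```

for the million-entry table asked about upthread, that means a handful
of byte probes per lookup regardless of table size (at the cost of
extra storage for the interior nodes).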

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Really stupid question about z/OS HTTP server

2008-01-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (McKown, John) writes:
 But on the off chance that I'm wrong, I will ask anyway. We use
 Windows as our desktop OS blech. One nice thing about it is that
 when we go to a restricted internal IIS web site, we are automagically
 logged on to the web site via the Active Directory trust mechanism
 (as I vaguely understand it). Is there any way to extend this so that
 when a user goes to our z/OS HTTP web server, they can be
 automagically logged on to their corresponding z/OS RACF id? We do use
 RACF on z/OS. We don't have any money for this, so a product (unless
 it is 100% free-as-in-beer and 100% supported) is out of the
 question. Yes, this is really a whine from the Windows people again
 about how unfriendly z/OS is. I wonder if they whine about our Linux
 and Solaris servers as well?

can you say kerberos? ... 

some windows references:
http://technet2.microsoft.com/windowsserver/en/library/b748fb3f-dbf0-4b01-9b22-be14a8b4ae101033.mspx
http://www.microsoft.com/windowsserver2003/technologies/security/kerberos/default.mspx

some ibm references
http://www.redbooks.ibm.com/abstracts/sg246540.html?Open
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/topic/com.ibm.db29.doc.admin/db2z_establishkerberosthruracf.htm
http://www-03.ibm.com/servers/eserver/zseries/zos/racf/pdf/share_03_2001_racf_kerberos_windows.pdf
http://www-03.ibm.com/servers/eserver/zseries/zos/racf/kmigrate.html

and then there is stuff like:

IBM CICS RACF Security and Microsoft Windows Server 2003 Security 
http://technet.microsoft.com/en-us/library/bb463146.aspx

kerberos was originally developed at MIT's Project Athena ... and then
became an internet standard (GSS) ... and has been adopted by quite a
few infrastructures for authentication interoperability

... from my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

select Term (term-RFC#) in the RFCs listed by section,
and then select GSS in the Acronym fastpath ... i.e.

generic security service  (GSS)
 see also network services , security
 5021 4768 4757 4752 4559 4557 4556 4537 4462 4430 4402 4401 4178 4121
 4120 3962 3961 3645 3244 3129 2942 2853 2744 2743 2712 2623 2479 2478
 2203 2078 2025 1964 1961 1510 1509 1508 1411

...

selecting RFC number brings up the corresponding summary in the lower
frame ... i.e.

5021 PS
 Extended Kerberos Version 5 Key Distribution Center (KDC) Exchanges
 over TCP, Josefsson S., 2007/08/17 (7pp) (.txt=13431) (Updates 4120)
 (Refs 4120) (was draft-ietf-krb-wg-tcp-expansion-02.txt)

...

and selecting the .txt=nnn field (in the rfc summary) retrieves the
actual RFC.

misc. past posts mentioning kerberos and/or pk-init
http://www.garlic.com/~lynn/subpubkey.html#kerberos

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Really stupid question about z/OS HTTP server

2008-01-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Chase, John) writes:
 I *think* you could do that using digital certificates, but I've only
 read that part of the RACF doc once and have not tried it (yet).

re:
http://www.garlic.com/~lynn/2008.html#53 Really stupid question about z/OS HTTP 
server

the base infrastructure for all of this has been Kerberos. It was
originally developed at MIT's project athena ... which was equally
funded by DEC and IBM ... and so we got to go by project athena for
periodic project reviews.

originally kerberos was purely password (aka shared-secret)
authentication. however, passwords can be eavesdropped and reused
... being shared-secret, the same value is used for both originating
authentication and validating authentication ... which leads
to lots of vulnerabilities and operational problems (including
what happens when humans have to deal with scores or hundreds
of unique passwords)
http://www.garlic.com/~lynn/subintegrity.html#secrets

public keys and digital signatures were originally proposed as
addressing some of the short-comings of shared-secret infrastructures.
first, there is a different value for generating authentication
information and for validating it. this can address the enormously
growing problem of having to manage large numbers of unique passwords
(security 101 typically requires unique passwords for unique security
domains as countermeasure to cross-domain attacks ... which is no longer
necessary in public key environment).
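a toy sketch of that asymmetry (textbook RSA with deliberately tiny,
insecure parameters, purely illustrative and nothing like the actual
kerberos pk-init exchange): the signer holds one value, the verifier a
different one, so compromising the validating side doesn't leak the
originating side:

```python
import hashlib

# toy textbook-RSA signature -- tiny insecure parameters, purely to
# illustrate the shared-secret vs public-key asymmetry described above
p, q = 61, 53                 # toy primes; never use sizes like this
n = p * q                     # 3233, public modulus
e = 17                        # PUBLIC verification exponent
d = 2753                      # PRIVATE signing exponent (e*d = 1 mod 3120)

def h(msg: bytes) -> int:
    # hash reduced mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:            # requires the private value d
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:   # needs only public (e, n)
    return pow(sig, e, n) == h(msg)

sig = sign(b"logon request")
print(verify(b"logon request", sig))    # -> True
```

unlike a shared secret, the verifier's copy of (e, n) is useless for
generating new authentications, so unique-secret-per-domain rules
aren't needed.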

the original draft of pk-init for kerberos ... simply used public keys
and digital signatures ... in lieu of passwords for authentication.
http://www.garlic.com/~lynn/subpubkey.html#kerberos

in a purely certificate-less environment
http://www.garlic.com/~lynn/subpubkey.html#certless

however, a variety of public key operation has evolved which includes
something called digital certificates ... and a digital certificate mode
of operation was eventually also added to the kerberos pk-init draft.

digital certificates were developed to address the scenario involving
first time interaction between complete strangers (aka the letters of
credit/introduction from the sailing ship days ... when the relying
party had no other means of obtaining information about the other
party). The purpose of the digital certificates is to carry certified
information regarding total strangers that can't be obtained any other
way.

the issue in all the major institutional authentication scenarios is
that digital certificates are redundant and superfluous ... especially
in the employer/employee scenario ... since it is rarely the case that
an employer is dealing with an employee as a total stranger. in a
real digital certificate scenario used for (kerberos) authentication, a
total stranger ... not otherwise known and/or for which there is
absolutely no prior information ... is allowed authorized access to the
system ... aka nominally the purpose of the digital certificate paradigm
is to carry the information about what the person is allowed to do
... with no requirement for any predefined (system) information
regarding the individual (and/or what they are allowed or not allowed
to do)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Inaccurate CPU% reported by RMF and TMON

2008-01-08 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


[EMAIL PROTECTED] (Jason To) writes:
 We have encountered some weird problem last week and discovered that
 the total MVS CPU busy percentage reported by both RMF and TMON were
 inaccurate. RMF and TMON reported MVS CPU percentage does not match
 with the total CPU% usage by the jobs running in the system at least
 in one LPAR, the other LPAR seems to be fine. For example the reported
 total CPU% was 72% at an interval period but only 40% when we add up
 all the CPU% of jobs, a disparity of 30%. From the WLM activity
 report, by comparing it with the total APPL% used divided by the total
 assigned CPs also produced result of 40+%. Hence, the MVS CPU
 percentage should have been 40+%.  Anyone out there have encountered
 this problem before? Any reported fix to resolve this problem? Btw, we
 are still at z/OS v1.4, running in the sysplex.

can you say capture ratio?

some past posts mentioning effect:
http://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
http://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2007g.html#82 IBM to the PCM market
http://www.garlic.com/~lynn/2007t.html#23 SMF Under VM

really strange the first time you hear it ... say the elapsed time
minus measured wait is much larger than the sum of the individually
measured cpu usages (especially coming from a vm background, where vm
actually does capture nearly every cycle).

the referenced SMF Under VM post includes some number of corporate MVS
URLs that go into much more detail.

old email
http://www.garlic.com/~lynn/2006v.html#email800717

discussing moving workload from 168 to 4341s w/o taking
into account capture ratio.
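plugging in the figures quoted upthread (72% reported busy, roughly 40%
accounted to jobs), the arithmetic is simply:

```python
# capture ratio = CPU accounted to address spaces / total measured CPU
# busy, using the figures quoted in the post above
reported_busy = 0.72     # RMF total CPU busy for the interval
accounted = 0.40         # sum of per-job CPU% for the same interval

capture_ratio = accounted / reported_busy
uncaptured = reported_busy - accounted   # overhead not charged to any job

print(f"capture ratio: {capture_ratio:.0%}")   # -> 56%
print(f"uncaptured:    {uncaptured:.0%} of the interval")
```

the 30-point gap isn't a measurement bug ... it is system overhead
(interrupt handling, paging, etc.) that never gets charged to any
address space.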

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: IBM LCS

2008-01-08 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Ben Alford) writes:
 I worked with an IBM 360/65 that had one MB of IBM LCS and later 2 MB
 of LCS from some other OEM when I was a student.  This 360/65 had 3
 frames of main storage, each with 256 KB of storage.  The way I
 remember it, we could ask for more LCS than main, but it was much slower.
 You could specify that data in COMMON as well as data buffers reside
 in the LCS under OS MVT. The LCS slowdown was noticable because the CPU
 had to wait longer for all data coming in/out of LCS and, of course, the
 CPU time was still ticking.  I don't know if it slowed down channel
 access much.

i believe cornell had a 360/75 with 8mbytes of ampex lcs.

standard 360/75 (and 360/65) storage was 750 nsec for 8byte interleaved
access (some claim that 360/75 was a hardwired 360/65 to get extra
thruput).

some installations setup LCS as extension of (executable) main memory
... that just ran slower. other installations used it for emulated
electronic disks ... with emulated data transfers; this could be things
like hasp buffer records and/or executable images.

i believe the ampex lcs had 8 microsecond access (better than 10 times slower).

if you look at the instruction timings in the 360/65, 360/67, and
360/75 functional characteristics manuals ... one of the components is
a prorated part of 750 nsecs for the (8byte) instruction fetch, i.e.
2byte instructions will include 1/4 of 750nsecs for instruction fetch,
4byte instructions will include 1/2 of 750nsecs, and 6byte instructions
will include 3/4 of 750nsecs.

this can be somewhat inaccurate for branch operations ... a double-word
aligned (2byte) BR instruction ... will incur the full 750nsecs
instruction fetch ... since the rest of the double word won't be used
for any other instruction operations.
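the prorating rule (and the branch caveat) can be sketched as:

```python
# prorated instruction-fetch charge as described above: 750ns fetches
# one 8-byte doubleword, and each instruction is charged its fraction
FETCH_NS = 750          # one 8-byte interleaved storage access
DW_BYTES = 8

def fetch_charge(instr_bytes):
    return FETCH_NS * instr_bytes / DW_BYTES

for length in (2, 4, 6):
    print(f"{length}-byte instruction: {fetch_charge(length):.1f} ns of fetch")

# the caveat above: a doubleword-aligned 2-byte BR that is taken wastes
# the rest of the doubleword, so it really costs the full 750 ns
print(f"taken 2-byte BR on a doubleword boundary: {FETCH_NS} ns")
```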

LCS was both an issue of slower electronic memory as well as longer
physical signal latency.

by comparison 3090 expanded storage was the same electronic memory but
physical packaging resulted in longer signal latency. as a result, it
was purely packaged as sort of emulated i/o transfer ... but with a much
wider bus (so the latency was amortized over much larger amount of data)
and synchronized transfer instructions ... to eliminate the significant
pathlength overhead in MVS for asynchronous i/o operations.

later physical packaging (and improved caching infrastructure)
eliminated the need for expanded storage ... however, emulated expanded
storage (as part of LPAR configuration) lingered on, compensating for
other system issues.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JCL parms

2008-01-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Shmuel Metz  , Seymour J.) writes:
 No. The original design of OS/360 was that all programs were
 subroutines.  A program written to be called as a jobstep task could
 also be called via LINK or ATTACH from another program. It was never
 designed for programs to assume that they were jobstep tasks, although
 IBM may have written code[1] that assumed that.

some number of univ. developed their own student job monitors
(predating availability of watfor).

we had 709 running fortran monitor for student jobs ... running
tape-to-tape ... tapes were physically moved back and forth between the
709 and 1401 ... with the 1401 handling front-end unit record processing.
student job elapsed time was on the order of a second.

moving to 360/65 with os360 and hasp ... minimum elapsed time for 3step
(student) fortran compile, link-edit and go ... was on the order of
30seconds ... effectively all of it (constantly re)executing job
scheduler.

various univ one-step job monitors attempted to attach compile,
link-edit, and go for student jobs (eliminating job scheduler overhead).

i had done some custom optimization with very careful reorganization of
stage-II sysgen output cards ... in order to very carefully physically
place files and PDS members on disk ... for optimal arm seek operation.
this had improved 3step (effectively job scheduler) from 30seconds to
about 13seconds (for typical student fortran job).

part of old presentation at aug68 SHARE meeting in boston discussing
very careful os360 stage-2 sysgen optimization ... in addition to
separate activity rewriting large amounts of cp67 virtual machine
kernel
http://www.garlic.com/~lynn/94.html#18 CP/67  OS MFT14

of course when watfor became available ... it eclipsed a lot of the work
on one-step job monitors.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: z/OS and VM Control Blocks

2008-01-05 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Lindy Mayfield) writes:
 It was only a question! (-: I certainly didn't mean to upset the status
 quo. ((--::

recent post about how i tried to handle it long ago and far away
... before OCO, in an ipcs alternative that i had implemented in rex(x).
http://www.garlic.com/~lynn/2007v.html#46 folklore indeed

it was eventually in use by all internal locations and PSRs ... even tho
there was a decision not to release it to customers.
http://www.garlic.com/~lynn/subtopic.html#dumprx

it sort-of started out as a demonstration of the functionality of the
new rex ... the stated objective was in half-time over period of
3months, i would re-implement ipcs in rex(x) with ten times the function
and it would run ten times faster (a little sleight of hand since the base
ipcs was all implemented in assembler).

i had access to softcopy of all the base source files (including control
block definitions) and documentation. however, nearly all this stuff had
been created for hardcopy/printed output. the particular issue was how
to come up with an appropriate online information display, including
being able to tailor it to the problem being dealt with (a very crude
form of online context-sensitive orientation).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: For the History buff's an IBM 5150 pc

2008-01-02 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Schwarz, Barry A) writes:
 The Apple I went on sale in 1976 so the author seems to have a limited
 view of what a PC is.

5100 pc was announced sep75.
http://www-03.ibm.com/ibm/history/exhibits/pc/pc_2.html

predating the 5150 pc in 1981.
http://www-03.ibm.com/ibm/history/exhibits/pc/pc_1.html

from above:

One of the earliest IBM attempts to move computing into the hands of
single users was the SCAMP project in 1973. This six-month development
effort by the company's General Systems Division (GSD) produced a
prototype device dubbed Special Computer, APL Machine Portable (SCAMP)
that PC Magazine in 1983 called "a revolutionary concept" and "the
world's first personal computer." To build the prototype in the short
half-year allowed, its creators acquired off-the-shelf materials for
major components. SCAMP could be used as a desktop calculator, an
interactive APL programming device and as a dispenser of canned
applications. The successful demonstration of the prototype in 1973 led
to the launch of the IBM 5100 Portable Computer two years later.
 
... snip ...

of course, one could claim that work by science center
http://www.garlic.com/~lynn/subtopic.html#545tech

creating cp67 virtual machines in the mid-60s, enabled the deployment of
CMS personal computing.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: It keeps getting uglier

2008-01-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Thompson, Steve) writes:
 WANG with their WANG/VS systems came up with an idea that would have met
 your problem. The workstations had a button that caused the CEC to
 download microcode for DP/WP (Data Processing / Word Processing). So
 your workstation could switch between types of work. [ASCII based S/360
 type architecture on steroids.]

 IBM attempted to make a product to market against WANG. They couldn't
 figure out how to do it economically. Problem was the distance from the
 tree to the eyeballs of the powers that were was about 2 inches. The
 problem was solved by having a PC that could do word processing while
 having an emulator (3270) for the 3270 type tasks, and then the PC would
 handle the word processing type tasks.

for some topic drift ... past posts about PC getting early market
traction because it sold for about the same price as 3270 and
in single desktop footprint could get 3270 terminal emulation
and some local (personal) computing
http://www.garlic.com/~lynn/subnetwork.html#emulation

OPD's displaywriter was in WANG wordprocessing market segment.

ROMP was an early 801 risc chip originally designed to be used for the
displaywriter follow-on product. when that was killed, the group looked
around for something else to use the machine for and settled on the unix
workstation market. they got the company that had done the pc/ix port to
do one for the displaywriter follow-on and renamed the product the
PC/RT (and the software AIX).
http://www.garlic.com/~lynn/subtopic.html#801

The PC/RT followon was the RS6000 with RIOS chipset. RS6000 was
relogo'ed as a hardware platform by some number of other companies
... including WANG as it got out of the hardware business. As part of
that change-over, some number of the people from RS6000 group went to
WANG.

old time article from nov80 mentioning wang, word-processing market
http://www.time.com/time/magazine/article/0,9171,950498,00.html?iid=chix-sphere

page mentioning some of the old/70s wordprocessing market
http://www.computermuseum.li/Testpage/DedicatedWPMicros.htm

article on demise of dedicated wordprocessor boxes; having given way to
multi-application PCs
http://www.cbronline.com/article_cg_print.asp?guid=265D4108-6F66-49EC-80B1-E51D2AA8876E

note that there was a project in the early 80s to replace the wide
variety of internal microprocessors with 801/risc processors (including
the ones used for displaywriters). this included all the processors in
the low and mid-range 370s ... at the time, the 4341-followon (4381) was
going to use a 801/risc processor; the s/38-followon (as/400) was going
to use a 801/risc processor ... and lots of others were also. A special
flavor of 801/risc, Iliad had additional features for supporting
emulation of other architectures ... some old 801-related email,
including mention of work on Iliad chips
http://www.garlic.com/~lynn/lhwemail.html#801

for other topic drift, old email mentioning 43xx ... e-architecture
machines
http://www.garlic.com/~lynn/lhwemail.html#43xx

i.e. while the high-end 370 came up with 370-xa (code named 811 for
nov78 document publication date), the low/mid range came up with
e-architecture (where dos/vs to vse came from).

for some archeological trivia, i contributed to the document killing
801/risc idea for the 4341-followon.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: It keeps getting uglier

2008-01-01 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Ed Gould) writes:
 Rick:

 FYI according to [EMAIL PROTECTED] the model 85 was the 1st 360 to have
 a HSB.

re:
http://www.garlic.com/~lynn/2007v.html#98 It keeps getting uglier

not just me:
http://www-03.ibm.com/ibm/history/history/year_1968.html

from above:

Additions to System/360 family are announced, including the Model
85. The high-speed cache, or buffer memory, found in the System/360
Model 85, is the first in the industry. The cache memory makes highly
prioritized information available at 12 times the speed of regular,
main-core memory.

... snip ... 

not only first 360 ... but first in the industry.



Re: It keeps getting uglier

2007-12-31 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Ed Gould) writes:
 Its been at least 30 years I will yield to your memory. I just
 remember it giving me a blow by blow description on the format of the
 instructions and how it worked plus timings. If it was another
 manual, fine.

360/30 functional characteristics ... see if this is
what you remember
http://bitsavers.org/pdf/ibm/360/funcChar/GA24-3231-7_360-30_funcChar.pdf

This has directory for -0, -6, and -7 360 Principles of Operation
http://bitsavers.org/pdf/ibm/360/poo/

past posts in thread:
http://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier
http://www.garlic.com/~lynn/2007v.html#68 It keeps getting uglier

for other drift, directory of some 360 FE manuals
http://bitsavers.org/pdf/ibm/360/fe/



Re: It keeps getting uglier

2007-12-31 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Ed Gould) writes:
 To refresh my memory was the 370 the first public machine that used
 the HSB? My memory says yes but as we have seen the POPS and FUNC
 manual are indeed different. There were quite a few machines I had no
 exposure to like the 44 and the 67 and the (1)95 among others

360/85 was first machine with cache.

this web page has some number of ibm product announcements
http://ed-thelen.org/comp-hist/IBM-ProdAnn/index.html

including the 360/85
http://ed-thelen.org/comp-hist/IBM-ProdAnn/360-85.pdf



Re: It keeps getting uglier

2007-12-31 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Doug Fuerst) writes:
 How do you figure that reverse engineering is an acceptable method of
 R&D or design? Reverse engineering is an easy way to replicate a
 design. Since the company creating the product, in this case IBM,
 spent millions developing the machine, they would be entitled to some
 exclusivity. How fair is it for every competitor to reverse engineer
 their machines to mimic the IBM box, and not compensate IBM for that?
 At least MOBO manufacturers use different chipsets and moderately
 different designs. I don't believe they are reverse engineering Intel
 boards, nor is AMD reverse engineering Core Duo's.

clone controller business was supposedly primary motivation
for the future system project ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#futuresys

i've posted before about being undergraduate and trying to get the 2702
communication controller to do some stuff and it turned out it couldn't
... which was somewhat motivation for the univ. to start a clone
controller project ... reverse engineering the ibm channel interface
and building a channel interface card for Interdata/3 ... programmed
to emulate 2702. this was written up blaming four of us for some
part of the clone controller business
http://www.garlic.com/~lynn/subtopic.html#360pcm

article from former corporate executive ... including some number of
comments about future system project
http://www.ecole.org/Crisis_and_change_1995_1.htm

including the following:

IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead that
the competition would never be able to keep up, and to have such a high
level of integration that it would be impossible for competitors to
follow a compatible niche strategy. However, the project failed because
the objectives were too ambitious for the available technology.  Many of
the ideas that were developed were nevertheless adapted for later
generations. Once IBM had acknowledged this failure, it launched its
'box strategy', which called for competitiveness with all the different
types of compatible sub-systems. But this proved to be difficult because
of IBM's cost structure and its R&D spending, and the strategy only
resulted in a partial narrowing of the price gap between IBM and its
rivals

... snip ...

above also referenced here
http://www.garlic.com/~lynn/2007u.html#17 T3 Sues IBM To Break its Mainframe 
Monopoly

there was recent question about some number of people departing and
going to work on vax/vms ... which led to joke about head of POK having
been a major contributor to VMS ... long winded story involving
termination of Future System project and mad rush to get stuff
back into the 370 product pipeline:
http://www.garlic.com/~lynn/2007v.html#96 source for VAX programmers
http://www.garlic.com/~lynn/2007v.html#100 source for VAX programmers

there is some case to be made that the Future System distraction and
letting the 370 product pipeline dry up contributed to giving the
processor clones a foothold in the market.

past reference to amdahl giving a talk at mit in the early 70s that
may be at least partially construed as referring to this ... recent
reference
http://www.garlic.com/~lynn/2007t.html#68 T3 Sues IBM To Break its Mainframe 
Monopoly

and other parts of postings in that thread:
http://www.garlic.com/~lynn/2007t.html#69 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007t.html#71 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007t.html#76 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007t.html#77 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007u.html#1 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007u.html#2 T3 Sues IBM To Break its Mainframe 
Monopoly



virtual appliance

2007-12-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


[EMAIL PROTECTED] (David Boyes) writes:
 Put bluntly, see my other note on using CMSDDR. Getting a 1 pack VM
 system up on the Flex box works MUCH better, CMSDDR understands CMS file
 structure so you can just pass images of the volumes over, and as long
 as you restore the entire volumes, z/OS won't even know it happened. 

 Useful note: when you FTP the CMSDDR output files between the VM system
 on the Flex box and the new system, use TYPE E, MODE B before you do the
 PUT or GET in the FTP session. This tells the FTP server on the other
 end to preserve character set and file parameters, so you won't have to
 worry about it. 

 Also, keep in mind that your Flex system can create AWSTAPE format
 files, and that CMS has a AWSTAPE pipe stage that can feed that to CMS
 utilities. If you're worried about licensing for VM on the Flex box, I
 think VM/370 (which IBM does not complain about usage) will run CMSDDR. 

 I'll have to look into creating a IPLable system image for people in
 your situation. Shouldn't be too hard. 

... for other topic drift ... in current virtualization genre there is a
lot being made of virtual appliance ... somewhat akin to what we used
to call service virtual machines ... but also gaining a lot of
cross-over with a system image that can be quickly and trivially
deployed ... usually for very specific purposes. In the security genre,
such images can also trivially go *poof* (and any compromises, possibly
because of an internet connection, evaporate at the same time).



Re: It keeps getting uglier

2007-12-28 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (John P. Baker) writes:
 The Diagnose instruction has been documented in every Principles of
 Operation manual issued for the S/360 architecture and for all
 subsequent superseding architectures, and in every case, has
 specifically stated that the functions performed by the Diagnose
 instruction are not published, but may impact any and all aspects of
 system operation, and if invoked by a user application built without
 access to that unpublished documentation, may negatively impact the
 proper functioning of the machine, requiring a Power On Reset and/or
 the assistance of a Hardware Support Engineer to bring the system back
 into proper working order.

re:
http://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier

it also says that the operation of the diagnose may be model dependent.

a little less than a month ago, it was 40yrs since I was introduced to
the (virtual machine) cp67 system ... three people came out to the
university from cambridge science center
http://www.garlic.com/~lynn/subtopic.html#545tech

to install cp67. while an undergraduate I did a lot of rework and
optimization of the cp67 kernel. i had also done a lot of work on os/360
optimization ... for the workload at the univ. i gave a presentation at
the aug68 share meeting in boston on some of that work ... part of
that presentation
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

one of the other things i did was develop a fast-path ccw translation
for cms disk i/o when running in a virtual machine (original cms was
implemented to be able to run on bare 360/40). I did this by defining a
new channel program op-code for disk read/writes ... which acted as an
immediate operation ... held the virtual SIO busy until the operation
had completed and then presented CC=1, CSW STORED.

I got some grief from the people at the science center since i was
violating the 360 principles of operation. however, it was a useful
performance improvement ... and so it was explained to me that I could
use the diagnose instruction ... since the diagnose instruction was
defined as being model dependent ... and for CP67, an artificial
virtual machine 360 *model* could be defined where the diagnose
instruction acted as defined by CP67 (w/o violating the principles of
operation).
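the shape of the trick can be sketched as a toy model (plain C, every
name and code value here invented purely for illustration -- real CP67
diagnose handling was kernel code and nothing like this in detail):

```c
#include <assert.h>
#include <string.h>

/* toy model of the idea above: the hypervisor intercepts a guest's
 * DIAGNOSE, performs the whole disk transfer synchronously on the
 * guest's behalf, and reflects immediate completion -- rather than
 * simulating a real channel program with device-busy and interrupt
 * sequencing, CCW by CCW. */

enum { DIAG_DISK_READ = 0x18 };          /* code picked for the sketch */

typedef struct {
    unsigned char disk[4][512];          /* pretend DASD: 4 records */
} Guest;

/* condition-code-like result: 0 = completed ok, 3 = not supported */
static int hypervisor_diagnose(Guest *g, int code, int rec,
                               unsigned char *buf)
{
    if (code != DIAG_DISK_READ || rec < 0 || rec >= 4)
        return 3;                        /* model-dependent: undefined
                                            codes just reflect an error */
    memcpy(buf, g->disk[rec], 512);      /* transfer done synchronously */
    return 0;                            /* completion presented at once */
}
```

the point of the model-dependent dodge: since each real 360 model was
allowed to define diagnose differently, the CP67 virtual machine could
be its own "model" with its own diagnose behavior.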

misc. past posts mentioning model dependent diagnose instruction:
http://www.garlic.com/~lynn/96.html#23 Old IBM's
http://www.garlic.com/~lynn/2001b.html#32 z900 and Virtual Machine Theory
http://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling 
convention
http://www.garlic.com/~lynn/2002h.html#62 history of CMS
http://www.garlic.com/~lynn/2003.html#60 MIDAS
http://www.garlic.com/~lynn/2003k.html#52 dissassembled code
http://www.garlic.com/~lynn/2003m.html#36 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2003p.html#9 virtual-machine theory
http://www.garlic.com/~lynn/2004.html#8 virtual-machine theory
http://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
http://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
http://www.garlic.com/~lynn/2005b.html#23 360 DIAGNOSE
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
http://www.garlic.com/~lynn/2007p.html#72 A question for the Wheelers - 
Diagnose instruction



Re: 2007 Year in Review on Mainframes - Interesting

2007-12-20 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


[EMAIL PROTECTED] (Conmackie, Mike) writes:
 And the money I've paid into Social Security all my life will be
 returned in my retirement with interest !

ss is pay as you go system ... not a fully funded retirement plan.  it
is one of the reasons why they are concerned about the ratio of people
paying-in to the number supported on retirement. SS historical ratio
table 1940-2006:
http://www.ssa.gov/history/ratios.html

This can drastically tip with baby boomers moving from paying to
collecting. The first baby boomer collects social security
http://abcnews.go.com/WN/LifeStages/story?id=3732745&page=1

there is also some gimmick in how much is paid; it is currently 15.3% ...
but for standard salary workers ... the company has to pay half of it
over and above the salary ... and then there is the other half deducted
from the salary. This is readily seen in tax returns for self-employed
workers where they have to pay the full 15.3%. for most purposes,
eliminate the facade and have it restructured so the employers paid the
full 15.3% before paying salary (theoretically reducing salaries paid
correspondingly) ... with it never showing up for individual employees
at all.
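the arithmetic of the facade is simple enough to sketch (2007-era rate;
the rate and wage caps change over time, so the numbers are
illustrative only):

```c
#include <assert.h>
#include <math.h>

/* the 15.3% split described above: half deducted from the employee's
 * salary, half paid by the employer on top of salary; self-employed
 * workers pay the whole thing themselves. 2007-era rate, ignoring
 * wage caps -- purely illustrative. */

static double employee_half(double wages) { return wages * 0.0765; }
static double employer_half(double wages) { return wages * 0.0765; }
static double self_employed(double wages) { return wages * 0.153; }
```

the facade argument is just that employee_half() plus employer_half()
is the same money as self_employed() ... only who appears to pay it
differs.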

in the past 10-15 yrs there have been some number of companies going under
(and/or declaring bankruptcy) because their pay as you go retirement
systems sometimes became their largest single expense
http://www.skeptically.org/curpol/id7.html
... and federal gov. having to assume the payment
http://en.wikipedia.org/wiki/Pension_Benefit_Guaranty_Corporation

some number of posts related to unfunded liabilities growing to
largest part of the budget and swamping the federal gov ... even if
everything else in the budget is eliminated.
http://www.garlic.com/~lynn/2007j.html#91 IBM Unionization
http://www.garlic.com/~lynn/2007j.html#93 IBM Unionization
http://www.garlic.com/~lynn/2007s.html#1 Translation of IBM Basic Assembler to 
C?
http://www.garlic.com/~lynn/2007t.html#13 Newsweek article--baby boomers and 
computers
http://www.garlic.com/~lynn/2007t.html#18 Newsweek article--baby boomers and 
computers

(federal) comptroller general (appointed in the mid-90s for 15yr term)
has been making references that congress for at least the past 50 yrs
hasn't been capable of simple middle school arithmetic; recent reference:
http://www.garlic.com/~lynn/2007q.html#7 what does xp do when system is copying



Re: It keeps getting uglier

2007-12-19 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


[EMAIL PROTECTED] (Phil Payne) writes:
 Has anyone from the Hercules team read IBM's rather stunning admission
 (on the above page - paragraph 176) that there is a confidential
 version of the PoP?  Their words, not mine.

there has been the (confidential) architecture redbook (distributed in
red 3ring binders) ... implemented in (cp67/)CMS script file ... with
conditional formatting to produce either the principles of operation
subset ... or the full (confidential) architecture redbook.

for other topic drift ... recent post mentioning cms script, gml, 
sgml, html, system/r, rdbms ... etc 
http://www.garlic.com/~lynn/2007v.html#17 Amazon's Simple Database

misc. past postings mentioning architecture redbook
http://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
http://www.garlic.com/~lynn/2004b.html#57 PLO instruction
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
http://www.garlic.com/~lynn/2004k.html#45 August 23, 1957
http://www.garlic.com/~lynn/2005b.html#25 360POO
http://www.garlic.com/~lynn/2005i.html#40 Friday question: How far back is PLO 
instruction supported?
http://www.garlic.com/~lynn/2005j.html#39 A second look at memory access 
alignment
http://www.garlic.com/~lynn/2005j.html#43 A second look at memory access 
alignment
http://www.garlic.com/~lynn/2005k.html#1 More on garbage
http://www.garlic.com/~lynn/2005n.html#48 Good System Architecture Sites?
http://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant 
HLASM coding practices ever defensible?
http://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007i.html#31 Latest Principles of Operation
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007u.html#30 folklore indeed



Re: About 1 in 5 IBM employees now in India - so what ?

2007-12-18 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:
 Nothing wrong with India - but selfishly, I want jobs where I am, even
 though I have it better off than those who need jobs there.

 Of course, in a global economy, you have a lot better chance to sell
 your wares in countries that you spend money in.

there is also issue that knowledge work is pretty distance insensitve in
a global economy ... and knowledge work frequently is one of the highest
valued work.

recent posts about recently published study on educational ranking of
different countries
http://www.garlic.com/~lynn/2007u.html#78 Educational ranking
http://www.garlic.com/~lynn/2007u.html#80 Educational ranking



Re: The future of PDSs

2007-12-12 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] writes:
 IBM created the PDS a long time ago - giving us some conveniences that
 fit within its OS design.Other computer companies either did not
 see this advantage or had OS structures that handled it other ways.

 Do we use PDSs now because that's what we have been using for decades?
 Or is it possible to still keep advantages of our OS and go in a
 different direction?

part of the original PDS design was trading off i/o resource vis-a-vis
real-storage resource ... at a time when real storage was extremely
constrained.  some number of other systems ... especially those that
came along later when real storage was a much less constrained resource
... made other kinds of implementation tradeoffs



Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Shmuel Metz  , Seymour J.) writes:
The only operating systems that are legal to run on Hercules are Linux,
and MVS 3.8 (I think). 

 Shirley all of these are legal:

BOS/360
BPS/360
CALL/360
CP/67
DOS/VSE
DOS/360
MTS
OS/VS1
OS/VS2 R1.7 (SVS)
TOS/360
TSS/360
VMF/370
  

this recent post references 
http://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at 
Lincoln Labs

some (virtual machine) cp67 historical references from Melinda's VM
paper at
http://www.princeton.edu/~melinda/

mentioning that very early, two new commercial companies were formed to
offer (virtual machine) cp67-based commercial timesharing services
http://www.garlic.com/~lynn/subtopic.html#timeshare

drawing people heavily from Science Center, 
http://www.garlic.com/~lynn/subtopic.html#545tech

Lincoln Labs, and Union Carbide.

It also makes references to MTS folklore having been initially built on
top of Lincoln Labs LLMPS.

There was an OS/360 operators console application called ONLINE/OS that
provided CMS-like interactive functionality. It was most frequently used
with PCP ... but could also be used on MFT and MVT.

CP67 had a function that could save a virtual memory image of a
running virtual machine. This was used with CMS to get rapid startup.
However, a technique was developed that could also checkpoint a
virtual memory image of OS/360 ... at a point when I/O had been quiesced
... allowing OS/360 quick start in a virtual machine (just restore the
saved virtual memory image). This could be used in conjunction with
restoring a saved image of OS/360 where ONLINE/OS had already been up
and running.
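the quick-start trick reduces to "snapshot the memory image at a quiet
point, restore it later instead of re-IPLing"; a toy sketch in C
(invented structure and names -- CP67's saved-system support was
nothing like this in detail):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* toy model: checkpoint guest memory once I/O is quiesced, then start
 * fresh instances by restoring the snapshot instead of a full IPL. */

typedef struct {
    size_t len;
    unsigned char *mem;                  /* saved copy of guest memory */
} Image;

static Image checkpoint(const unsigned char *guest, size_t len)
{
    Image img = { len, malloc(len) };
    assert(img.mem != NULL);             /* sketch: no real error path */
    memcpy(img.mem, guest, len);         /* valid only with I/O quiesced */
    return img;
}

static void quick_start(unsigned char *guest, const Image *img)
{
    memcpy(guest, img->mem, img->len);   /* restore beats a full re-IPL */
}
```

the reason the quiesce matters: a snapshot taken with channel programs
in flight would restore a memory image inconsistent with device state.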

old posts mentioning online/os
http://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
http://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data 
being unreadable?
http://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
http://www.garlic.com/~lynn/2004d.html#33 someone looking to donate IBM 
magazines and stuff
http://www.garlic.com/~lynn/2007b.html#50 Is anyone still running


part of Melinda's paper has appendix mentioning ONLINE/OS was never
released outside the company (although I had a copy of it at the
university in the 60s, also much of the original work had been done by a
person on assignment from Union Carbide) ref:

E.C. Hendricks, C.I. Johnson, R.D. Seawright, and D.B. Tuttle,
Introduction to ONLINE/OS and ONLINE/OS User’s Guide, IBM Cambridge
Scientific Center Reports 320-2036, 320-2037, March, 1969



Re: Open z architecture and Linux questions

2007-12-07 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Wayne Driscoll) writes:
 Not really answering either question, but on the topic of Q2, the recent
 port of Open Solaris to System z was done only under z/VM, with no
 attempt to get it to run under LPAR mode because of the increased amount
 of work LPAR mode would have added (paraphrased from the company that
 did the porting work).  Now that the hard part of getting Linux to run
 in an LPAR has been done, I don't see the need to eliminate it, but it
 would be interesting to see the percentage of Linux on z usage in LPAR
 vs z/VM.

long ago and far away, similar arguments were made for both gold/au and
aix/370.  issue was that field engineering had lots of diagnostic,
recording, and recovery requirements for servicing customer machines
(EREP, RAS, etc).

the effort to add mainframe EREP/RAS functionality to any of these ports
was several times larger than just doing the straightforward port
(while vm was able to satisfy the requirement, including for any of its
guest operating systems). however, over the yrs, there has been more and
more of virtual machine support functionality being moved into LPAR and
service processor operation.

slightly related recent post
http://www.garlic.com/~lynn/2007t.html#77 T3 Sues IBM To Break its Mainframe 
Monopoly

also in this post
http://www.garlic.com/~lynn/2007u.html#8 Open z/Architecture or Not

the reference to various OCO related material from vmshare archives, the
reference to TUCC's MVS/370 to MVS/XA conversion experiences describes
part of the success was having access to SIE and VM/SF information
http://vm.marist.edu/~vmshare/browse?fn=OCOCME&ft=NOTE

... part of difficulty discussion from above ...

The key to gaining performance from the primary guest operating system
is the I/O Passthru feature of SIE.  This allows the guest system to
initiate I/O directly to the I/O subsystem without intervention from
VM/SF.  The SIE microcode assist is a documented feature, however the
portion that supports I/O Passthru is not documented.  As a result it
took us two months to correct this problem.  The problem was
extraordinarily difficult to analyze, because the symptoms were
noticeable only after the problem occurred.  We had all of MVS/370's I/O
devices in I/O Passthru, including the Memorex 1270 devices.  In certain
circumstances, such as MVS disabling for 09x wait, VM/SF decided to
remove all of the I/O from I/O Passthru.  After taking all devices out
of I/O passthru, VM/SF will then put them all back in.  Performing this
function requires that VM/SF perform a Modify Subchannel to each device
to accomplish this.

... snip ..



Re: Crypto Related Posts

2007-12-07 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 the x9.59 financial standard approach was then to fix the underlying
 weakness, lack of strong authentication ... which also then eliminated
 needing to hide the transaction information from crooks (since the
 information was useless w/o the proper authentication). some of this is
 discussed in the posts concerning the naked transaction metaphor
 http://www.garlic.com/~lynn/subintegrity.html#payments

re:
http://www.garlic.com/~lynn/2007t.html#61 Re: Crypto Related Posts

some recent related:

Why should merchants keep credit card data?
http://www.networkworld.com/news/2007/120607-why-should-merchants-keep-credit.html

the proposed approach was raised at least a decade ago ... it addresses
harvesting data-at-rest in repositories ... but doesn't address the
eavesdropping and skimming attacks.
http://www.garlic.com/~lynn/subtopic.html#harvest

previous business process difficulties (with the suggested approach) was
availability of online connectivity (giving merchants access to the
necessary data for required/mandated business operations). the pervasive
growth of internet connectivity has somewhat mitigated those issues.

Can mid-market merchants comply with PCI standards?
http://www.networkworld.com/news/2007/120607-can-mid-market-merchants-comply-with.html

another approach that has been tried is one-time account numbers
(as an approach to eliminating replay attacks ... aka eliminating
being able to use information from previous transactions for fraudulent
activity).



Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Jim Mulder) writes:
   But actually it did not take decades, as the original release of 
 MVS/XA in 1982 functionally supported 16-way SMP.  Of course there
 were no such processors at the time (nothing greater than 2-way until 
 the 4-way 3084), but it did run for testing purposes using 16 virtual
 CPUs on a modified version of VM.  Of course, as larger processors
 were actually built, additional work was done (and continues to be done)
 to address performance/scaling issues. 

re:
http://www.garlic.com/~lynn/2007t.html#76 T3 Sues IBM To Break its Mainframe 
Monopoly

well, sort of. 

one of the things to get rapidly to 16-way smp implementation, as well
as addressing performance/scaling issues, was to relax standard 370
cache consistency rules (and, in fact, most SMP vendors going to larger
numbers of processors have almost always involved how to deal with cache
consistency issues).

remember that compareswap ... misc. posts about smp and/or compareswap
http://www.garlic.com/~lynn/subtopic.html#smp

was invented by charlie (compare-and-swap was chosen because CAS are
charlie's initials) at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and i've mentioned before the original difficulty of getting
compare-and-swap into 370 architecture. Some of those difficulties
are why the example of program failure still appears in the
compare-and-swap writeup
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6.1?SHELF=DZ9ZBK03&DT=20040504121320
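the programming pattern those writeups document -- retry until the
compare-and-swap succeeds -- survives essentially unchanged today; a
sketch using C11 atomics (a modern analogue of 370 CS, not mainframe
code):

```c
#include <assert.h>
#include <stdatomic.h>

/* the classic compare-and-swap retry loop: read the old value, compute
 * the new one, and swap only if nobody changed the word in between.
 * on failure, atomic_compare_exchange_weak refreshes 'old' with the
 * current value, so the loop simply recomputes and tries again. */
static void atomic_add(atomic_int *ctr, int delta)
{
    int old = atomic_load(ctr);
    while (!atomic_compare_exchange_weak(ctr, &old, old + delta))
        ;   /* lost a race -- 'old' now holds the fresh value, retry */
}
```

single-threaded this trivially succeeds on the first try; the loop only
matters when multiple processors update the same word concurrently ...
which is exactly the multiprogramming case CS was invented for.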

i've frequently claimed that the 801 risc effort 
http://www.garlic.com/~lynn/subtopic.html#801

was attempt to go to the opposite extreme from what went on
in FS
http://www.garlic.com/~lynn/subtopic.html#futuresys

and also claimed the lack of cache consistency in 801 risc was adverse
reaction to the heavy performance penalty paid in 370 by its strong
cache consistency requirement. in fact, it wasn't until somerset (joint
ibm, motorola, apple, et al) for power/pc that there was (risc) work on
smp and addressing cache consistency.

in any case, part of doing 16-way smp (and relaxing 370 cache
consistency rules) was much more detailed attention paid to every piece
of code (because of the associated hardware changes for relaxed cache
consistency).

for some more topic drift, in just the 3084 time-frame, both mvs and
(standard) vm had efforts to go thru all kernel data and storage management
and make sure things were cache-line sensitised. the issue was the
increased probability that more than one cache might be accessing
different data items which happened to overlap in the same cache line
(resulting in significant cache line thrashing). The claim at the time
was that this effort resulted in 5-10 percent increased system thruput
(for 4-way). As the number of independent caches that have to be
coordinated grows, the probability increases that there is going to be
some kind of cache interference.
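
the cache-line sensitizing arithmetic is easy to illustrate; a small sketch
(python), assuming a hypothetical 128-byte line size (actual 308x line sizes
may have differed):

```python
LINE = 128  # assumed cache-line size in bytes; illustrative only

def line_of(offset, line=LINE):
    """Index of the cache line containing a byte offset."""
    return offset // line

def shares_line(off_a, off_b, line=LINE):
    """True if two fields fall in the same cache line -- a recipe for
    line thrashing when different CPUs update them concurrently."""
    return line_of(off_a, line) == line_of(off_b, line)

def pad_to_line(offset, line=LINE):
    """Round a field offset up to the next line boundary, so per-CPU
    data gets a cache line to itself."""
    return -(-offset // line) * line

# two per-CPU counters laid out 8 bytes apart share a line ...
print(shares_line(0, 8))               # True
# ... padding the second to its own line eliminates the interference
print(shares_line(0, pad_to_line(8)))  # False (pad_to_line(8) == 128)
```

the 5-10 percent thruput claim in the text is about exactly this: unrelated
kernel fields that happened to satisfy `shares_line` getting separated.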

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2007t.html#76 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007u.html#1 T3 Sues IBM To Break its Mainframe 
Monopoly

for slightly more light-hearted, seasonal reference, old email with
mvs/xa tso reference from long ago and far away:

Date: 08/26/82 15:24:21

re: mvs/xa; i've seen it for myself, a 3081 system completely idle
except for one MVS/XA tso user. Response time is longer for that
single TSO user on the 3081 than for CMS doing same type of stuff on a
loaded 3033. MVS/XA is a copy of the one that a large internal
datacenter is using for their development work. the large
internal datacenter has gen'ed the TSO logo screen (in big block
letters)
 
 BAH
HUMBUG
 
The only thing slower than the 3081 service processor (5+ seconds to
single step one instruction) on the 3081 is possibly MVS/XA TSO. The
observation is that TSO is so slow, that you have lots of time to
syntax your next input and make sure that there are no mistakes (because
if there are ... then things will really be slow).
 
... snip ...

somewhat related to post in this thread
http://www.garlic.com/~lynn/2007t.html#40 Why isn't OMVS command integrated 
with ISPF?



Re: Open z/Architecture or Not

2007-12-06 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Bob Shannon) writes:
 Sure. The thousands of in-stream usermods that were written prior to
 XA, and which greatly inhibited subsequent upgrades. I certainly agree
 that in the early days usermods were written to overcome functional
 deficiencies in MVS. Some, such as logical swap, were incorporated
 into MVS. Others, such as the dual master catalog mod at a large US
 insurance company, proved to be a nightmare to maintain and an even
 worse nightmare to remove.

cp67 and vm370 were notorious for user modifications ... in part because
they shipped not only with full source ... but the whole customer
maintenance infrastructure was source based (i.e. each fix shipped as an
incremental source update file).

in the early 80s there was a study of local vm370 system modifications.
internal corporate local modifications were as large as the base
system ... and the share library source changes were approximately
equivalent to the internal corporate local modifications (in size and
function).

part of all this started with unbundling announcement 23jun69
http://www.garlic.com/~lynn/subtopic.html#unbundle

starting to charge for application software. however, the case was made
that kernel code could still be free (bundled).

A lot of the structural and functional enhancements that I had done to
cp67 as an undergraduate (and were picked up and shipped in the product)
were dropped in the morph from cp67 to vm370. However, I had
done the port myself ... referenced in this prior post
http://www.garlic.com/~lynn/2007t.html#69 T3 Sues IBM TO Break its Mainframe 
Monopoly

and this old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102

I distributed and supported the CSC/VM system (mentioned in the above)
for a large number of internal datacenters. The product picked up some
small pieces of the above as part of VM370 rel3.

However, other pieces were selected to be released as a separate resource
manager product ... which also got chosen to be the guinea pig for
unbundling/charging for kernel software (which meant that i had to spend a
lot of time with business people ironing out the policies for kernel
software charging).
http://www.garlic.com/~lynn/subtopic.html#fairwhare
http://www.garlic.com/~lynn/subtopic.html#wsclock

because of the extensive source oriented culture ... most customers
managed to regularly track local source code changes as new releases
came out.

However, I know of (at least) one notable exception. Somehow or another,
a very early CSC/VM system was leaked to ATT longlines. Over a period
of years, they developed a large body of their own source changes
... never bothered to track releases, and migrated it to a number of
their own machines. Nearly a decade later, I was tracked down by the
ATT national marketing rep about trying to help get ATT longlines off
this ancient CSC/VM system.

The OCO-wars (object code only) in the early 80s were somewhat
turbulent.

There had been some number of commercial online timesharing services
formed from cp67 and vm370.
http://www.garlic.com/~lynn/subtopic.html#timeshare

these were somewhat similar to the internal HONE systems that worldwide
sales and marketing used
http://www.garlic.com/~lynn/subtopic.html#hone

One of these was Tymshare which in the mid-70s started providing the
vmshare online discussion forum to share members. That vmshare forum has
now been archived here
http://vm.marist.edu/~vmshare/

included in the forum archives are the OCO-war discussions from the
early 80s.



Re: Open z/Architecture or Not

2007-12-06 Thread Anne Lynn Wheeler

Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 The OCO-wars (object code only) in the early 80s were somewhat
 turbulent.

re:
http://www.garlic.com/~lynn/2007u.html#6 Open z/Architecture or Not

as before the vmshare archives are at
http://vm.marist.edu/~vmshare/

old vmshare post about the original source maint infrastructure,
originally developed on cp67
http://vm.marist.edu/~vmshare/read?fn=HISTORYft=MEMOline=49

a quicky search for some OCO related posts from archive ...

this is discussion from 93 regarding the OCO's 10th b'day:
http://vm.marist.edu/~vmshare/browse?fn=OCO:BDAYft=MEMO

OCO Study Handouts from SHARE 72 (Feb89)
http://vm.marist.edu/~vmshare/browse?fn=OCOSTUDYft=NOTE

TUCC's MVS/370 to MVS/XA conversion experiences (Jun88)
http://vm.marist.edu/~vmshare/browse?fn=OCOCMEft=NOTE

VM Program Products which should be distributed with Source Code.
(started May80)
http://vm.marist.edu/~vmshare/browse?fn=VMSOURCEft=MEMO

old email mentioning vmshare ... including discussing obtaining monthly
copies of all vmshare files for putting up on the HONE system for
worldwide sales and marketing
http://www.garlic.com/~lynn/subtopic.html#hone

and other internal systems.

for other drift, one of the things i did during this period was a
rex(x)-implemented replacement for the ipcs debugging tool.
http://www.garlic.com/~lynn/subtopic.html#dumprx

part of the issue was to demonstrate that rex(x) wasn't just another
pretty exec language. the objective was to be able to replace the
existing ipcs (which was a large body of assembler implemented code)
with a

1) rex(x) implementation, 
2) that took less than half-time over 3 months to implement, 
3) had ten times the function and 
4) ten times the performance (took some sleight of hand)

a side-effect was that if it was decided to replace the existing
implementation ... then source would have to be shipped for the new
ipcs ... regardless of any OCO-policy.

It was never decided to ship the implementation as replacement IPCS
... but it eventually came to be used at effectively all internal
datacenters and the majority of PSRs processing customer reported
problems.

However, i was approved to give a share presentation on the
implementation ... and within a couple months after the presentation,
there were a number of similar implementations by various organizations.



Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-05 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Van Dalsen, Herbie) writes:
 And who came up with XA I/O? Amdahl, in order to do MDF and share
 channels had to do floating I/O interrupts, and related control block
 structures in HSA (a la XA) to get this to work.

try 360/67 smp channel director for sharing channels and floating i/o
interrupts ... 360/67 functional characteristics can be found here
http://www.bitsavers.org/pdf/ibm/360/funcChar/

... 360/67 had 24bit & 32bit addressing modes, also referenced
in this post
http://www.garlic.com/~lynn/2007t.html#75 T3 Sues IBM To Break its Mainframe 
Monopoly

after future system was killed 
http://www.garlic.com/~lynn/subtopic.html#futuresys

there was a mad rush to get out 303x in parallel with starting on xa.  the
architecture documents for xa, subchannel infrastructure, access
registers, et al were referred to as 811 ... from their nov78
publication date (aka 29yrs ago). I had a fairly complete copy ... they
were individually numbered copies, classified at the highest level
... requiring a special double-lock security filecabinet and periodic
auditing.

apparently information about people with copies leaked out and several
people were approached ...  aka industrial espionage ... and the feds
eventually were involved.

part of it involved the extraordinary lead time to move mvs to anything
... reference to killing vm370 product because they needed all the
developers moved to pok to help meet mvs/xa delivery schedule
http://www.garlic.com/~lynn/2007t.html#68 T3 Sues IBM To Break its Mainframe 
Monopoly

even before 811 documents were published we had put together a project
to turn out a 16-way smp processor on a very aggressive delivery
schedule. it was going great guns until it came to the attention of the
head of pok that it would possibly be decades before mvs ever had 16-way
smp support (some people were then invited to never show up at the pok
site again). misc. past posts mentioning smp support (and/or
compare-and-swap instruction)
http://www.garlic.com/~lynn/subtopic.html#smp

there was small advanced technology conference in pok spring of 77 (a
little over 30yrs ago) with presentations on both 16-way smp and 801
risc ... for lots of topic drift, misc. 801 risc posts
http://www.garlic.com/~lynn/subtopic.html#801

misc. past posts mentioning 811
http://www.garlic.com/~lynn/2000d.html#21 S/360 development burnout?
http://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: 
IBM Mainframe at home)
http://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: 
IBM Mainframe at home)
http://www.garlic.com/~lynn/2002d.html#49 Hardest Mistake in Comp Arch to Fix
http://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
http://www.garlic.com/~lynn/2002j.html#28 ibm history note from vmshare
http://www.garlic.com/~lynn/2002k.html#34 30th b'day  original vm/370 
announcement letter (by popular demand)
http://www.garlic.com/~lynn/2002m.html#28 simple architecture machine 
instruction set
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2003c.html#1 Wanted: Weird Programming Language
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
http://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
http://www.garlic.com/~lynn/2005j.html#34 IBM Plugs Big Iron to the College 
Crowd
http://www.garlic.com/~lynn/2005j.html#35 IBM Plugs Big Iron to the College 
Crowd
http://www.garlic.com/~lynn/2005p.html#18 address space
http://www.garlic.com/~lynn/2005s.html#26 IEH/IEB/... names?
http://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
http://www.garlic.com/~lynn/2006f.html#20 Old PCs--environmental hazard
http://www.garlic.com/~lynn/2006j.html#27 virtual memory
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
http://www.garlic.com/~lynn/2006n.html#27 sorting was: The System/360 Model 20 
Wasn't As Bad As All That
http://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant 
HLASM coding practices ever defensible?
http://www.garlic.com/~lynn/2007g.html#57 IBM to the PCM market(the sky is 
falling!!!the sky is falling!!)
http://www.garlic.com/~lynn/2007k.html#28 IBM 360 Model 20 Questions
http://www.garlic.com/~lynn/2007l.html#71 IBM 360 Model 20 Questions


misc. posts mentioning 16-way smp support
http://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)
http://www.garlic.com/~lynn/95.html#6 801
http://www.garlic.com/~lynn/95.html#11 801  power/pc
http://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 
7090--used for business or
http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
http://www.garlic.com/~lynn/2002i.html#82 HONE
http://www.garlic.com/~lynn/2002p.html#58 AMP  vs  SMP

Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-05 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Van Dalsen, Herbie) writes:
 And who came up with XA I/O? Amdahl, in order to do MDF and share
 channels had to do floating I/O interrupts, and related control block
 structures in HSA (a la XA) to get this to work.

re:
http://www.garlic.com/~lynn/2007t.html#75 T3 Sues IBM To Break its Mainframe 
Monopoly
http://www.garlic.com/~lynn/2007t.html#76 T3 Sues IBM To Break its Mainframe 
Monopoly

for other topic drift, a big part of the queued subchannel i/o interface
was to compensate for the enormous mvs pathlength to (re)drive i/o
... lots of i/o idle between the end of the previous operation and
initiating the next operation.

part of this was also predicated that during the 70s, systems started to
shift from being significantly processor constrained/bottlenecked to
more and more being i/o bottlenecked.

i had started pointing this out early ... and at one point some disk
division executive assigned their performance group to refute the
characterization (i.e. over more than a decade, relative disk
system thruput had declined by an order of magnitude; aka disks got
faster ... but other parts of systems had gotten an order of magnitude
faster still). after some period they came back and pointed out that I
had slightly understated the problem. this eventually turned into a share
presentation on how to optimize systems for disk thruput.

the initial justification was that the queued interface allowed just
moving the redrive operation from the mvs kernel into the microcode of the
same processor (not even offloaded to a different processor); the
microcode engineers could do a significantly better redrive
implementation than the mvs software developers.
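
the redrive idea can be caricatured as a queue that the device side services
itself; a toy sketch (python) where the "microcode" pulls the next request the
instant one finishes, instead of the device sitting idle across the kernel's
interrupt-handling pathlength. all names are hypothetical -- this is not the
actual 370-xa subchannel interface:

```python
from collections import deque

class QueuedSubchannel:
    """Toy model of a queued i/o interface: software just enqueues
    requests and the (simulated) microcode redrives the device itself."""
    def __init__(self):
        self.pending = deque()
        self.completed = []
        self.busy = False

    def start(self, request):
        # software side: queue the work; no per-request redrive pathlength
        self.pending.append(request)
        if not self.busy:
            self._redrive()

    def _redrive(self):
        # microcode side: pull the next request as soon as one finishes
        if self.pending:
            self.busy = True
            current = self.pending.popleft()
            self.completed.append(current)  # "device end" for current
            self._redrive()                 # immediate redrive, no idle gap
        else:
            self.busy = False

sub = QueuedSubchannel()
for req in ["read trk 1", "read trk 2", "write trk 3"]:
    sub.start(req)
print(sub.completed)  # ['read trk 1', 'read trk 2', 'write trk 3']
```

the contrast is with the old SIO model, where each completion interrupt had
to climb all the way through the kernel before the next operation started.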

i had worked on a 5-way smp project in 75 where the processor complex
had significant microcode capability ... some past posts
http://www.garlic.com/~lynn/subtopic.html#bounce

and i had defined a queued i/o interface ... but it included being able
to offload much of it to a separate/dedicated processor. i had also
defined a queued microcode interface for dispatching ... allowing
processors to pick off work w/o having to go thru the kernel dispatch
function. this was canceled w/o shipping ... and some of the same people
then reconstituted to work on the 16-way smp effort mentioned in the
previous post.

i was allowed to play disk engineer in bldgs. 14 & 15 ... misc. posts
http://www.garlic.com/~lynn/subtopic.html#disk

and one of the things i worked on was the whole testcell testing
infrastructure that was being done on stand-alone dedicated
machines. They had tried MVS at one point with a single testcell but
experienced 15mins MTBF (hangs, crashes, etc, requiring manual
intervention and MVS reboot). I undertook to rewrite the i/o supervisor
so that multiple testcells could be tested concurrently on the same
machine in an operating system environment.  This turned out to have
very low processor utilization and so the engineers started also using
the test machines for other purposes.

bldg 15 got one of the first 3033 engineering machines (outside of POK)
for disk testing. partly because things were going very well ... they
also managed to put together 16 3330 disk drives and 3830 controllers
where the machine could be concurrently used for other purposes.

this was during a period when there was heavy 3880 controller
development and testing going on.

at one point there was a formal product performance acceptance test
for the 3880 done in STL using standard operating system testing.

then bright and early one monday i got a call from the engineers in bldg
15 asking what i had done over the weekend to totally destroy their
system thruput. I said I hadn't done anything ... and they claimed they
hadn't done anything. So i had to start diagnosing what went on.

It turns out that over the weekend, they had replaced the 3830 (for the
string of 16 3330 drives) with a 3880 controller. The problem was that
in the move from 3830 to 3880 they went from a (fast) horizontal
microcoded processor to a much slower vertical microcoded processor
(with a separate data path). As a result, the 3880 had much slower
command and function processing ... and initially failed the formal
product performance acceptance test. The 3880 was then tweaked to
present early interrupt to the channel (indicating operation complete)
before the 3880 had finished all its operation. Then the 3880 could
complete its operation in parallel with the operating system processing
the interrupt and getting around to redriving i/o. This didn't bother
the standard operating system formal performance acceptance tests.

The problem was that I had significantly redone the I/O subsystem, not
only to make it much more reliable and available than standard MVS
... but interrupt processing was dramatically faster than standard MVS
... and would get around to redriving i/o  

Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-04 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Roger Bowler) writes:
 IBM worked long and hard over many years to successfully establish
 S/360 and its successors as *the* standard computer
 architecture. Indeed for a 20 year period between about 1970 to 1990
 S/360/370/390 was almost the only architecture which would reasonably
 be considered for most business systems large or small. With the
 result that applications tied to MVS and VSE are now firmly embedded
 into the infrastructure of the various information systems (banks,
 utilities, government, airlines) that allow our society to function
 the way it does. The figure of $1 trillion invested in software
 compatible with IBM mainframes has been widely quoted.

some recent topic drift in thread that wandered into
run-up/justification for 360
http://www.garlic.com/~lynn/2007t.html#63 Remembering the CDC 6600
http://www.garlic.com/~lynn/2007t.html#65 Remembering the CDC 6600

also referenced in the above, there was a failed/aborted attempt to
take a large detour in the early 70s with the future system
effort
http://www.garlic.com/~lynn/subtopic.html#futuresys

motivated by the growth in the plug-compatible controller business ...
discussed in more detail in this recent post
http://www.garlic.com/~lynn/2007r.html#74 System 360 EBCDIC vs. ASCII

it was in the FS period that Amdahl launched his plug-compatible
processor business. In the early 70s, Amdahl gave a talk at MIT where he
was quized about it. One of the questions was what justification did he
use to raise funding for the company. The response was something about
customers had already spent $200b in 360-based application software, and
even if IBM were to totally walk away from 360 (could possibly be
considered a veiled reference to the future system project), that
software base would be sufficient to keep him in business through the
end of the century (i.e. the $200b number was less than a decade after
360 had been announced).

The future system distraction drew a lot of resources away from 370
activities. When future system was finally killed, there was mad
scramble to get software and hardware products back into the 370 product
pipeline. The lack of products in the 370 product pipeline possibly
contributed to market opportunities for clone processor vendors.

The 303x were part of that mad scramble ... which was effectively
started in parallel with what was to become 3081 and 370-xa.

recent post going into details of 303x effort
http://www.garlic.com/~lynn/2007p.html#1 what does xp do when system is copying

however, it was the long lead-time to do mvs/xa and the associated mad
scramble that led to justification to kill vm370 and transfer everybody
from the burlington mall vm370 group to pok ... supposedly as necessary
in order to meet the mvs/xa schedule. endicott eventually did manage to
acquire the vm370 product mission and keep it alive ... but effectively
had to reconstitute the group from scratch.



Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-04 Thread Anne Lynn Wheeler


Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 some recent topic drift in thread that wandered into
 run-up/justification for 360
 http://www.garlic.com/~lynn/2007t.html#63 Remembering the CDC 6600
 http://www.garlic.com/~lynn/2007t.html#65 Remembering the CDC 6600

 also referenced in the above, there was a failed/aborted attempt to
 take a large detour in the early 70s with the future system
 effort
 http://www.garlic.com/~lynn/subtopic.html#futuresys

 motivated by the growth in the plug-compatible controller business ...
 discussed in more detail in this recent post
 http://www.garlic.com/~lynn/2007r.html#74 System 360 EBCDIC vs. ASCII

re:
http://www.garlic.com/~lynn/2007t.html#68 T3 Sues IBM To Break its Mainframe 
Monopoly

i wasn't exactly unbiased ... i had somewhat ridiculed the future system
effort during the period (drawing comparison with a cult movie that had
been playing down in central sq) and continued to work on 370 stuff
(including making statements that the resource manager, which i already
had running, was better than the theoretical pipe dreams being specified
in future system architecture documents).

slightly related old email from the period
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430
http://www.garlic.com/~lynn/2006w.html#email750827

the mad rush after FS was killed contributed to decisions to pick up
some of the work (that i had continued to do) and ship it in products.



Re: T3 Sues IBM To Break its Mainframe Monopoly

2007-12-04 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Lindy Mayfield) writes:
 I was thinking (dreaming) today about what if, when I was giving training
 for MVS stuff, each student had their own mainframe instead of connecting
 to a central one.  We could do so much more.

This was somewhat the original idea behind HONE (hands-on network
environment)
http://www.garlic.com/~lynn/subtopic.html#hone

after the 23jun69 unbundling announcement
http://www.garlic.com/~lynn/subtopic.html#unbundle

Prior to unbundling, new/young system engineers acquired quite a bit of
their knowledge, working as part of team at customer installations.  the
unbundling announcement pretty much put an end to this apprentice-like
activity. In the 60s, while an undergraduate at the univ, i was doing a
large number of os/360 enhancements ... and prior to unbundling, they
would cycle new SEs thru the univ. every six months (that I would get to
train).

HONE started out creating a number of cp67 virtual machine datacenters
that would support branch office system engineers running (guest)
operating systems. The cp67 virtual machine systems (running on real
360/67s) included enhancements supporting simulation of the newly
announced (pre-virtual memory) 370 instructions (allowing newer guest
operating system versions to be run).

The science center 
http://www.garlic.com/~lynn/subtopic.html#545tech

had also ported apl\360 to cms\apl and reworked it for operation in a
virtual memory environment (including arbitrarily large workspaces, up
to 16mbytes, rather than the somewhat toy apl\360 workspaces that were
typically 16kbytes to 32kbytes).

CMS\APL on HONE was leveraged to also deploy a large number of sales and
marketing support applications. These applications soon dominated HONE
utilization and the original HONE purpose somewhat withered away. For
example, by the mid-70s, it was no longer possible to submit a customer
order that hadn't first been processed by HONE configurators and/or
other applications (although by this time, HONE had migrated to VM370
and APL\CMS).



Re: Crypto Related Posts

2007-12-03 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Patrick O'Keefe) writes:
 We've been moving towards secure data transmissions with business 
 partners (with testing starting in about a week) but nobody had
 checked what we could do to lessen the effect  encryption would 
 have on our processors.

the internal network 
http://www.garlic.com/~lynn/subnetwork.html#internalnet

required encryption on all traffic that left corporate facility (like
inter-site traffic). when most places were doing 9.6kbit, we were
working on full-duplex T1 ... and looking at associated problems in HSDT
project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

old email reference looking at problem supporting full duplex T1 on 3081
http://www.garlic.com/~lynn/2006n.html#email841115

in this post discussing encryption technology
http://www.garlic.com/~lynn/2006n.html#36 The very first text editor

part of the effort was looking at being able to move past hardware link
encryptors as solution to the opportunities (which was the default
state-of-the-art at the time).

other old crypto related email
http://www.garlic.com/~lynn/lhwemail.html#crypto

included old email reference to a non-certificate-based public key
infrastructure for more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#email810515

in this post:
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the 
network

other recent posts about the discontinuity between the hsdt project (doing
T1 and faster interconnects) and other operations that were still looking at
9.6kbit with a little more to (high-speed) 56kbit
http://www.garlic.com/~lynn/2007q.html#45 Are there tasks that don't play by 
WLM's rules

for other old topic drift ... we had been called to come in and consult
with this small client/server startup that wanted to do payments on
their server
http://www.garlic.com/~lynn/subnetwork.html#gateway

and they had this technology they invented called SSL that they wanted
to use as part of the implementation. we had to do some end-to-end
studies looking at how to apply the technology to the business processes.
some related posts
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

afterwards we participated in the x9a10 financial standard working group
that in the mid-90s had been given the requirement to preserve the
integrity of the financial infrastructure for all retail payments.
the result was the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

x9.59 took a slightly different approach, in part because of detailed
end-to-end threat and vulnerability analysis involving retail payments,
basically identifying significant authentication vulnerability which
then led to requirements for hiding transaction information ... as
countermeasure to crooks obtaining the information and being able to
perform fraudulent transactions. the observation was that the
transaction information was needed at a large number of different
processes, potentially occurring over extended periods of time ... and as
a result, even if the planet was buried under miles of crypto ...  it
still wouldn't prevent information leakage.

the x9.59 financial standard approach was then to fix the underlying
weakness, lack of strong authentication ... which also then eliminated
needing to hide the transaction information from crooks (since the
information was useless w/o the proper authentication). some of this is
discussed in the posts concerning the naked transaction metaphor
http://www.garlic.com/~lynn/subintegrity.html#payments
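
the core x9.59 idea ... that a strongly authenticated transaction doesn't
need hiding, because harvested transaction data is useless without the
authentication key ... can be sketched like this (python; x9.59 actually
specifies public-key digital signatures, an HMAC merely stands in here to
keep the sketch self-contained, and all names are illustrative):

```python
import hashlib
import hmac

def sign_txn(key, txn):
    """Authenticate a transaction body. A real x9.59 transaction carries
    a digital signature; an HMAC tag stands in for the sketch."""
    return hmac.new(key, txn.encode(), hashlib.sha256).hexdigest()

def bank_accepts(key, txn, tag):
    # the bank only executes transactions carrying valid authentication
    return hmac.compare_digest(sign_txn(key, txn), tag)

consumer_key = b"held only in the consumer's token"
txn = "pay merchant 42.00 from acct 1234"
tag = sign_txn(consumer_key, txn)
print(bank_accepts(consumer_key, txn, tag))  # True

# a crook who harvested the transaction data in flight still cannot
# originate a new transaction -- the key never travels with the data
forged = "pay crook 999.00 from acct 1234"
forged_tag = sign_txn(b"attacker's guess at the key", forged)
print(bank_accepts(consumer_key, forged, forged_tag))  # False
```

which is the "naked transaction" point: the account number can leak
without enabling fraud, so burying everything in crypto becomes unnecessary.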



Re: Data Center Theft

2007-12-02 Thread Anne Lynn Wheeler


[EMAIL PROTECTED] (Timothy Sipples) writes:
 In fairness, the DS6000 is physically relatively small, although I wouldn't
 want to carry one by myself on my bicycle.  The spindles (individual
 drives) are even smaller, but you'd need a number of them to have a RAID
 set and the complete data.  Tough but not impossible.

 I think the IT marketplace is in for a shock when people figure out that
 losing the keys means losing the data.  It isn't like a bank vault where
 you can hire a locksmith to drill some holes over several days.  It's so
 critical to store and manage the encryption keys in a safe, secure,
 recoverable repository.

can you say key escrow? ... this was one of the themes from the key
escrow meetings from the mid-90s. however, there was a lot of confusion
about what key escrow meant, i.e.

1) gov. held all keys?
2) institutions holding keys for their own data encryption (as an
availability, business continuity and no-single-point-of-failure
measure)?
3) all kinds of keys? ... authentication as well as encryption

#1 got lots of bad press, including all the swirl around the clipper chip
and things like LEAF

#3 ... authentication keys aren't really an availability issue ... and
escrowing them could violate some basic security principles regarding
being able to associate all activities uniquely with individuals.

with all the bad press ... various key escrow activities sort of just
evaporated

wiki reference:
http://en.wikipedia.org/wiki/Key_escrow

nist references
http://csrc.nist.gov/keyrecovery/

misc. past posts mentioning key escrow
http://www.garlic.com/~lynn/aadsm9.htm#pkcs12 A PKI Question: PKCS11- PKCS12
http://www.garlic.com/~lynn/aadsm16.htm#11 Difference between TCPA-Hardware and 
a smart card (was: example: secure computing kernel needed)
http://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature 
vulnerability
http://www.garlic.com/~lynn/aadsm23.htm#6 PGP master keys
http://www.garlic.com/~lynn/2001c.html#65 Key Recovery System/Product
http://www.garlic.com/~lynn/2001h.html#7 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#36 Net banking, is it safe???
http://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
http://www.garlic.com/~lynn/2002d.html#39 PKI Implementation
http://www.garlic.com/~lynn/2003j.html#53 public key confusion
http://www.garlic.com/~lynn/2004i.html#12 New Method for Authenticated Public 
Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2006d.html#39 transputers again was Re: The demise 
of Commodore
http://www.garlic.com/~lynn/2006d.html#40 transputers again was Re: The demise 
of Commodore
http://www.garlic.com/~lynn/2007c.html#1 Decoding the encryption puzzle



Re: Why isn't OMVS command integrated with ISPF?

2007-11-29 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
 Anyone who believe that's a fundamental limitation of 3270
 hardware that can't be worked around:

 o Has never used VM/CMS

 o Has been brainwashed by TSO

 ... probably both.  On CMS, I can type input to my program, while
 it runs, in anticipation of a VM READ.  I can type immediate commands
 to my Rexx EXEC to turn on tracing with no ATTN nor need to wait for
 in input prompt.

we actually did some hardware mods to the 3277 to eliminate a race
condition if you happened to type at the instant the system wrote to the
terminal (which would lock the keyboard) ... aka 327x being a
half-duplex infrastructure.

we complained about the change-over to 3274 controller with 3278
terminal (i.e. effectively terminal manufacturing cost reduction moving
a lot of components back into shared controller). having shared
electronics back in 3274 controller made 3278 terminal operations
(including response) a lot slower. complaining about it basically got
the response that the significant hardware slowdown effectively wasn't
noticeable since mvs (tso) was so slow anyway.

post with old 3272/3274 comparisons
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

difference also shows up later with terminal emulation and the difference
between file download with ANR (i.e. 3272/3277) and DCA (i.e.
3274/3278) protocols (anr three times dca thruput)

lots of past posts mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

for some total topic drift ... old email mentioning tso product manager
asking me if i would consider doing version of my resource manager for
mvs/tso operation (this was after the marketing division decided to start
marketing CMS as the corporation's strategic interactive product)
http://www.garlic.com/~lynn/2006b.html#email800310
http://www.garlic.com/~lynn/2006v.html#email800310b

reference in these posts
http://www.garlic.com/~lynn/2006b.html#39 another blast from the past
http://www.garlic.com/~lynn/2006v.html#23 Ranking of non-IBM mainframe builders?

in some sense, CMS provided interactive personal computing in 60s, 70s
and some part of the 80s ... but then saw personal computing starting to
shift to PCs.

for other folklore topic drift, cern did a report at share circa '74
about tso/cms bakeoff. internally within the company, copies of the
report were classified confidential - restricted (i.e. available on
need-to-know only) ... aka while they couldn't restrict its availability
to customers ... they could restrict its availability to people in
marketing and product development.

... and courtesy of the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

virtual machines, cp67, cms (originally stood for cambridge monitor
system before renamed to conversational monitor system as part of vm370
morph), gml (invented in '69 at the science center) precursor to sgml,
html, xml, etc
http://www.garlic.com/~lynn/subtopic.html#sgml

and internal network technology
http://www.garlic.com/~lynn/subnetwork.html#internalnet
also used in bitnet/earn
http://www.garlic.com/~lynn/subnetwork.html#bitnet

here is reference discussing transformation from sgml to html at cern
http://infomesh.net/html/history/early

and first webserver outside europe was on slac vm370 system: 
http://www.slac.stanford.edu/history/earlyweb/history.shtml



Re: SMF Under VM

2007-11-27 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Kopischke, David G.) writes:
From what I understand, we just use canned processes to extract
 SMF, load databases and create reports. But since we don't use VM
 at home and have no experience with it, maybe we're just not
 understanding where this data is in that environment ??? Is there
 any documentation that specifies what SMF data is available under
 VM and what is not ??? With respect to %CPU BUSY, I understand it's
 virtual under VM, but there still has to be some method of gauging
 how much CPU a guest is using, isn't there ??? How do VM shops
 report this ???

for decades VM would account for processor usage (both virtual and
total) which turned out to correspond very closely with
total/actual busy (which was also measured).

other infrastructures have tended to account for processor busy
which has been less than total/actual cpu busy (measured by other
methods). The discrepancy (which has periodically been quite
substantial) was frequently expressed as the capture ratio ... aka the
sometimes small percentage of cpu busy that was actually accounted for

for some, the concept of capture ratio took quite a bit of time to
sink in ... since a system not accounting for all cpu usage was quite a
foreign concept.
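
the capture ratio idea amounts to simple arithmetic; a tiny python
sketch (the numbers are made up purely for illustration):

```python
def capture_ratio(accounted_cpu_secs: float, measured_cpu_secs: float) -> float:
    """fraction of measured cpu busy that the accounting records explain"""
    return accounted_cpu_secs / measured_cpu_secs

# e.g. accounting records charge 42 cpu-seconds to specific work while
# hardware-level measurement shows 60 cpu-seconds busy:
ratio = capture_ratio(42.0, 60.0)   # 0.70 ... 30% of cpu busy is uncaptured
```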

running under VM ... one possible approach is for the (captured
... at least by vm) non-virtual processing time to be handled along
with all the other uncaptured processor time (from the standpoint of a
guest operating system running in a virtual machine).

some of this also has to be handled with LPARs w/o VM software ... since
LPARs are essentially a stripped down VM subset moved into the microcode
of the machine (and then you can have virtual guests running in a VM
software virtual machine ... which, in turn might be running in a LPAR
virtual machine ... which is finally running on the real hardware).

a few results from a quick search engine use of the term capture ratio
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10526
http://publib.boulder.ibm.com/tividd/td/TDS390/SH19-6818-08/en_US/HTML/DRLM9mst48.htm
http://www.ibm.com/developerworks/wikis/display/zosperfinstr/Controlling+SMF+Record+Production
http://www.ibm.com/developerworks/websphere/library/techarticles/0407_garza/0407_garza.html
http://www.cmg.org/measureit/issues/mit38/m_38_10.html

the original cp67 system delivered to the univ. the last week of jan68
did have something slightly reminiscent of uncaptured time ... which was
actually captured (i.e. specifically measured processor time) that
wasn't associated with any specific operation (called overhead). This
would increase significantly as the number of concurrent processes
increased (aka it scaled extremely poorly). I completely reworked that
implementation to eliminate the non-scaling characteristic ... as well
as being able to account for what was actually being done.



Re: Running REXX program in a batch job

2007-11-18 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.


[EMAIL PROTECTED] (Paul Gilmartin) writes:
 I believe Cowlishaw's book reports that Rexx was developed in the VM
 and MVS environments concurrently.  It flourished in the former and
 withered in the latter, less likely because CLIST fulfilled the need
 better than EXEC2 than because less enthusiasm for innovation exists
 in the MVS environment (case in point: TCP/IP).  Rexx didn't resurface
 under MVS until TSO/E.

well, not quite:

posting of old reference/quote 
http://www.garlic.com/~lynn/2005j.html#41 TSO replacement

from Melinda's vm history paper
http://www.princeton.edu/~melinda/25paper.pdf

one of the quotes:

Mike Cowlishaw had made the decision to write a new CMS executor on
March 20, 1979. two months later, he began circulating the first
implementation of the new language, which was then called ``REX''.  Once
Mike made REX available over VNET, users spontaneously formed the REX
Language Committee, which Mike consulted before making further
enhancements to the language. He was deluged with feedback from REX
users, to the extent of about 350 mail files a day. By consulting with
the Committee to decide which of the suggestions should be implemented,
he rather quickly created a monumentally successful piece of software.

... snip ...

similar comments are mentioned in rexx wiki page
http://en.wikipedia.org/wiki/REXX

and another old quote from this post
http://www.garlic.com/~lynn/2006p.html#31 25th Anniversary of the Personal 
Computer

from one of the references in the above:

By far the most important influence on the development of Rexx was the
availability of the IBM electronic network, called VNET. In 1979, more
than three hundred of IBM's mainframe computers, mostly running the
Virtual Machine/370 (VM) operating system, were linked by VNET. This
store-and-forward network allowed very rapid exchange of messages (chat)
and e-mail, and reliable distribution of software. It made it possible
to design, develop, and distribute Rexx and its first implementation
from one country (the UK) even though most of its users were five to
eight time zones distant, in the USA.

... snip ...

and for other topic drift, this was part of the thread related to the
internal network 
http://www.garlic.com/~lynn/subnetwork.html#internalnet

having been larger than the arpanet/internet from just about the
beginning until sometime mid-85. part of the internal network issues was
that while there were mvs/jes2 nodes, nearly all the nodes were vm
... and the jes2 nodes had to be carefully regulated.

there were several issues around why jes2 nodes had to be carefully
regulated on the internal network (some independent of the fact that the
number of vm systems were significantly larger than the number of mvs
systems)

1) jes2 networking started out being some HASP mods from TUCC that
defined network nodes using the HASP pseudo device table ... limited to
255 entries ... 60-80 entries nominally taken up by pseudo spool devices
... leaving possibly only 170 entries for network node definitions

2) jes2 implementation would discard traffic if it didn't have either
the origin or destination node in its local definition. the internal
network had more nodes than jes2 could define for the majority of its
lifetime ... so jes2 needed to be restricted to boundary nodes (at least
not discarding traffic just passing thru).

3) jes2 implementation had a number of other deficiencies, including
having confused header information as to network-specific and local
process handling. different versions or releases with minor variation in
headers could bring down a whole mvs system. even restricted to purely
boundary nodes, there is the infamous story of a jes2 upgrade in san jose
resulting in mvs system crashes in hursley. as a consequence there were
special vm drivers created for talking to mvs jes2 systems ... which
would convert jes2 headers to a compatible format for the specific system
on the other end of the line. this was somewhat a side-effect of the
vm implementation having separated networking control information from
other types of information ... effectively providing a kind of gateway
implementation ... something not possible in the JES2 networking
infrastructure (including not having a way of protecting itself from
other JES2 systems, requiring intermediary vm systems to keep the JES2
systems from crashing each other).

misc. past posts mentioning hasp, jes2, and/or jes2 networking
http://www.garlic.com/~lynn/subtopic.html#hasp

at some point ... while VM could run native protocol drivers as well as
(multiple different) JES2 drivers ... JES2 could only run a specific
JES2 driver ... it was decided to start shipping VM only with JES2
drivers (even tho the native VM protocol drivers were more efficient).
this was seen in the bitnet deployment
http://www.garlic.com/~lynn/subnetwork.html#bitnet

the vm tcp/ip product was developed 

Re: CSA 'above the bar'

2007-11-14 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Shmuel Metz  , Seymour J.) writes:
 PSA is real address 0; it's absolute address 0 for at most one of the
 processors in the complex. Neither real nor absolute addresses are virtual
 addresses, and the mapping of virtual 0 to real 0[1] is strictly a
 software convention.

multiprocessor support required a unique PSA for every processor.

in 360 multiprocessor, the prefix register (for every processor)
contained the real address of the PSA for that processor; different
processors chose different real addresses for their PSA ... so as to
have a unique PSA for every processor in the complex. the real, real
page zero was no longer addressable (assuming every processor chose some
other real address than zero).

this was modified for 370: the prefix register specified the real
address of the PSA for that processor (as in 360) ... however, if the
processor referenced the address in the prefix register, it would
reverse translate to real page zero. as a result, the real, real page
zero could be used as common communication area between all processors
in the complex ...  and was addressed by using the address in the
processor's prefix register.

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/3.7?DT=20040504121320

from above:

1.  Bits 0-50 of the address, if all zeros, are replaced with bits 0-50
of the prefix.

2. Bits 0-50 of the address, if equal to bits 0-50 of the prefix, are
replaced with zeros.

3. Bits 0-50 of the address, if not all zeros and not equal to bits 0-50
of the prefix, remain unchanged.

... snip ...

#1 &amp; #3 were how things operated in 360 multiprocessor support; #2 was
introduced with 370 multiprocessor support (modulo 360 &amp; 370 were 4kbyte
pages and the above description is for 64bit Z and 8kbyte pages)
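
the three rules can be put as a short sketch (python; follows the 64bit
Z description quoted above with an 8kbyte prefix area; names are
illustrative, and the prefix is assumed 8k-aligned):

```python
PREFIX_AREA = 0x2000   # 8kbyte prefix area (address bits 51-63 untranslated)

def apply_prefix(real_addr: int, prefix: int) -> int:
    """map a real address to an absolute address (prefix 8k-aligned)"""
    if real_addr < PREFIX_AREA:
        # rule 1: real page zero relocates to the prefix area
        return prefix + real_addr
    if prefix <= real_addr < prefix + PREFIX_AREA:
        # rule 2: the prefix area reverse-translates to absolute zero
        return real_addr - prefix
    # rule 3: all other addresses pass through unchanged
    return real_addr
```

so each processor sees its own PSA at real address zero, while the
shared absolute page zero remains reachable by referencing the address
in that processor's prefix register.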

past posts mentioning multiprocessor support and/or
compare&amp;swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

other recent posts in this thread:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#67 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#69 CSA 'above the bar'



Re: [ClassicMainframes] multics source is now open

2007-11-12 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Farley, Peter x23353) writes:
 Thanks a lot for the info and the link.  Most interesting.  Another
 important piece of computer history available to the world at large.  Bravo
 to Bull for releasing it.

 It would be an interesting project to write the emulator for that machine
 architecture.

for even more topic drift ... recent RDBMS related post ... drifting
into mentioning that Multics shipped the first RDBMS product, MRDS
http://www.garlic.com/~lynn/2007s.html#20 Ellison Looks Back As Oracle Turns 30
and even further drift in followup
http://www.garlic.com/~lynn/2007s.html#21 Ellison Looks Back As Oracle Turns 30

also mentions that Multics unbundled MRDS ... which created issues
about not allowing base system (free) software to have dependencies
on priced software.

lots of past posts mentioning RDBMS
http://www.garlic.com/~lynn/subtopic.html#systemr

I ran into a similar problem when my resource manager was selected to be
guinea pig for starting to charge for kernel software. I had a lot of
kernel restructuring (in the resource manager) for multiprocessor
operation. This created a problem because bundled/free multiprocessor
operation needed lots of code from the resource manager.
http://www.garlic.com/~lynn/subtopic.html#unbundle



Re: ATMs

2007-11-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (R.S.) writes:
 Yes, I can. AFAIK z/OS version is not popular one. I know *big* ATM
 installation which  migrated from z/OS to NonStop. People from ACI
 claimed that most of their installtions are not on mainframe.

 Timothy: I like mainframes, I have personal interest in mainframe
 business growth (at least survive), but I see no reason to be
 unhonest.

a reference from hp/nonstop 

ACI’s BASE24 on the NonStop server hits 40 billion transaction mark
http://www.hp.com/products1/24x7/strategic/aci.html


disclaimer ... we did some marketing against them when we were doing our
ha/cmp product ... misc. past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

for even more drift, latest newsletter
http://www.tandemworld.net/newsletter%20nov07.htm

and in later life, even worked on some joint projects with ACI.

for instance AADS work
http://www.garlic.com/~lynn/x959.html#aads

nacha AADS rfi (submitted on our behalf, since we weren't
nacha members)
http://www.garlic.com/~lynn/nacharfi.htm

involved modifying pre-auth capability in the EFT (debit) network switch

pilot results
http://internetcouncil.nacha.org/docs/ISAP_Pilot/ISAPresultsDocument-Final-2.PDF



Re: ATMs

2007-11-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Timothy Sipples) writes:
 It's an interesting bit of history that the first Tandem machine wasn't
 available until 1976, well after the first electronic ATM (1967) and lots
 of other ATMs.  From what I've read the first networked ATM appeared in
 1968, and the first popular ATM (i.e. same model placed into service by
 more than one bank) was the IBM 2984 starting in 1973.  The IBM 2984
 offered variable cash withdrawals and instantly deducted from your account,
 so it was 100% on-line -- 34 years ago.  (I remember my father using our
 local bank's first ATM, newly installed, when I was a young child.  It
 seemed like magic.)  Presumably most if not all of these ATMs connected to
 IBM System/360s and /370s.  Tandem came along after almost a decade of
 ATMs.

re:
http://www.garlic.com/~lynn/2007s.html#6 ATMs

early work was done at los gatos lab ... before i was spending any time
there. however, i do remember people talking about having worked on the
development. they had large supply of bills from numerous different
countries ... which they kept in a locked vault in the basement (for
testing with the machines during development). they also mentioned story
about one of the early machines going in across the street from a fast
food restaurant and kids feeding condiment packets into the card slot
(one of the early bug fixes was countermeasure for such an attack).

old posts reference 2984
http://www.garlic.com/~lynn/2006q.html#5 Materiel and graft
http://www.garlic.com/~lynn/2006u.html#40 New attacks on the financial PIN 
processing
http://www.garlic.com/~lynn/2006x.html#9 Plurals and language confusion
http://www.garlic.com/~lynn/2007l.html#47 My Dream PC -- Chip-Based



Re: Poster of computer hardware events?

2007-11-09 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (, IBM Mainframe Discussion List) writes:
 I made a mistake.  A track not in the cache would take on the order of
 20 milliseconds, so that would equate to 20 days instead of one day.  A
 track already cached would result in an access time of one millisecond.
 If the 4K block can be found in a buffer somewhere in virtual storage
 inside the processor, it might take from 100 to 1000 instructions to
 find and access that data, which would equate to 100 to 1000 seconds,
 or roughly one to 17 minutes.  And that assumes that the page
 containing the 4K block of data can be accessed without a page fault
 resulting in a page-in operation (another I/O), in which case we are
 back to several days to do the I/O.

 By the way, it takes at least 5000 instructions in z/OS to start and
 finish one I/O operation, so you can add about two hours of overhead to
 perform the I/O that lasts for 20 days.

 You really want to avoid doing an I/O if at all possible.

reply to comment about RPS-miss (in the vmesa-l flavor of this thread)
http://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?

i had been making comments over a period of yrs that disk relative
system thruput had declined by an order of magnitude (i.e. disks were
getting faster but processors were getting faster much faster). this
eventually led somebody in the disk division (gpd) to assign the
gpd performance group to refute the statements. after several weeks they
came back and effectively said that i had somewhat understated the disk
relative system thruput degradation ... when RPS-miss was taken into
account.

they then put a somewhat more positive spin on it and turned it
into share 63 presentation b874 ... some past references:
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

one of the issues is whether the 5k instruction pathlength is the
roundtrip from EXCP (including channel program translation overhead) or
the roundtrip just after it has been passed to the i/o supervisor???

for comparison numbers ... i had gotten cp67 total roundtrip for page
fault down to approx. 500 instructions ... this included page fault
handling, page replacement algorithm, a prorated fraction of page i/o
write pathlength (which includes everything to start/finish i/o), total
page i/o read pathlength (including full i/o supervisor), and two task
switches thru dispatcher (one to switch to somebody else, waiting on the
page fault to finish and another to switch back after the page i/o read
finishes). to get it to 500 instructions involved touching almost every
piece of code involved in all of the operations.

I believe the 5000 instruction number was one of the reasons that
3090 extended store was a synchronous instruction (since the asynchronous
overhead and all related gorp in mvs was so large).

earlier, there had been some number of electronic 2305 paging devices
deployed at internal datacenters ... referred to as the 1655 model (from
an outside vendor). these effectively provided low latency but were
limited to channel transfer rate ... and still cost whatever the
asynchronous processing overhead was.

the 3090 extended store was done because of physical packaging issues
... but later when physical packaging was no longer an issue ... there
were periodic discussions about configuring portions of regular memory
as simulated extended store ... to compensate for various shortcomings
in page replacement algorithms.

with regard to the cp67 500 instruction number vis-a-vis MVS ... i
would periodically take some heat regarding MVS having much more robust
error recovery as part of the 5000 number (even tho the 500 number was
doing significantly more). so later when i was getting to play in bldgs
14 &amp; 15 (dasd engineering lab and dasd product test lab), i had the
opportunity to rewrite the vm370 i/o supervisor. the labs in bldg. 14&amp;15
were running stand-alone processor testing for the dasd/controller
testcells (one at a time). They had tried doing this under MVS but had
experienced 15min MTBF (system crashing and/or hanging with just a
single testcell). I undertook to completely rewrite the i/o supervisor
to make it absolutely bullet proof, allowing concurrent testcell
operation in an operating system environment. lots of past posts mentioning
getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

some old postings about comparisons of degradation of disk relative
system thruput. the claim was that doing similar type of cms workload
... in going from cp67 on 360/67 with 80 users to vm370 on 3081 ... it
should have shown an increase to several thousand online users
... instead of an increase to 300 or so online users. The increase in
online users is roughly the 

Re: Real storage usage - a quick question

2007-11-08 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Knutson, Sam) writes:
 You should have the PTFs for z/OS APAR OA17114 installed if you are
 using paged fixed buffers in DB2 V8.   Not having it was one of the
 causes of a z/OS outage here when a DB2 DBA accidently overcommitted
 storage to DB2.

aka application page fixed buffers ... allows applications to specify
the real addresses in the channel program ... avoiding the dynamic
channel program translation (creating a duplicate of the channel program
passed by excp/svc0) and dynamic page fixing that otherwise has to occur
on every i/o operation (however, it reduces the pageable storage
available to the rest of the system)
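
conceptually, the per-i/o work that page-fixed buffers avoid looks
something like the following ... a much-simplified python sketch with
hypothetical names, ignoring buffers that cross page boundaries (which
require splitting the CCW with data chaining) and all command chaining:

```python
from dataclasses import dataclass

@dataclass
class CCW:
    opcode: int
    addr: int    # buffer address: virtual in the caller's channel program
    count: int

def translate_channel_program(ccws, virt_to_real, pin_page):
    """build a shadow channel program with real buffer addresses"""
    shadow = []
    for ccw in ccws:
        page = ccw.addr & ~0xFFF     # 4kbyte page containing the buffer
        pin_page(page)               # dynamic page fix for the i/o duration
        real = virt_to_real(page) | (ccw.addr & 0xFFF)
        shadow.append(CCW(ccw.opcode, real, ccw.count))
    return shadow
```

with page-fixed buffers (EXCPVR-style), the channel program already
contains real addresses, so this copy/translate/fix pass is skipped.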

recent post mentioning difference between EXCP and EXCPVR (vis-a-vis
channel program translation)
http://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage 
backing up

other recent posts discussing dynamic channel program translation (in
the initial translation from MVT to OS/VS2 supporting virtual memory,
there was extensive borrowing of technology from cp67 CCWTRANS, channel
program translation)
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
http://www.garlic.com/~lynn/2007f.html#34 Historical curiosity question
http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
http://www.garlic.com/~lynn/2007n.html#35 IBM obsoleting mainframe hardware
http://www.garlic.com/~lynn/2007o.html#37 Each CPU usage
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007p.html#72 A question for the Wheelers - 
Diagnose instruction
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'



Re: Real storage usage - a quick question

2007-11-07 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Veilleux, Jon L) writes:
 In z/OS 1.8 the memory management is much more conducive to large
 memory. They no longer use the least recently used algorithm and no
 longer check every page. This has made a big difference for us. Under
 1.7 we had issues with large real memory sizes due to the constant
 checking by RSM. This is no longer the case and we have increased our
 memory dramatically with no performance hit.

one of the things found in clock LRU-approximation that i had
originally done as undergraduate in the 60s
http://www.garlic.com/~lynn/subtopic.html#wsclock

was that if the interval between page reference-bit resets started to
exceed some limit, then there was little differentiation benefit from
the reset activity ... least recently used tends to have some implicit
dependencies on the amount of history ... if the duration is too long
... then it loses much of its correlation for differentiating between
pages as to future page reference pattern.

however across a wide range of configurations and workloads in the 70s,
clock LRU-approximation had the advantage of effectively being able to
(usefully) dynamically adapt the interval. however with a lot of cp67
experimenting and also heavy use of storage reference traces and page
replacement modeling ... it was possible to show that outside some
useful operating range ... the use of LRU algorithms for
differentiating/predicting future page reference behavior became less
and less accurate. It was also possible to show that for very large
memories ... that the overhead of repeatedly resetting page reference
bits provided less benefit than any possible improvement in page
replacement strategy.

we did do some experimenting at the science center attempting to
recognize the operating region/environment across which clock
LRU-approximation was beneficial ... and attempted to take some secondary
measures/strategies when it was outside that operating
region/environment.

one of the scenarios was that most LRU-approximation algorithms are
measured against how well they performed vis-a-vis a simulation that
exactly implemented least-recently-used page ordering (measured in terms
of total page faults for a given workload and real storage size).  Good
approximations tended to come within 5-15 percent (total page faults) of
real least-recently-used page ordering. We were able to find some page
replacement variations that instead of being 5-15 percent worse/more
(total page faults compared to simulated real least-recently-used page
ordering), showed 5-15 percent fewer total page faults.

the scenario was that in some configuration/workload scenarios,
LRU-approximation could effectively cycle thru every page in real storage
w/o finding a candidate ... and then take the first page it started
with. Besides having a lot of processing overhead, this characteristic
effectively degraded to FIFO page replacement (there are operating
regions for LRU where it can degenerate to FIFO page replacement while at
the same time taking an extraordinary amount of processor overhead). our
variation tended to recognize when operating in this
configuration/workload region and effectively switched to RANDOM page
replacement at very low processor overhead (and modeling showed that
when not able to make any other differentiation between pages to be
replaced ... RANDOM replacement makes a better choice than FIFO,
independent of the overhead issue).
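the clock scan with a RANDOM fallback described above can be sketched as
follows (a hypothetical python illustration ... the actual implementation
was 360 assembler inside the cp67 kernel):

```python
import random

def clock_select(frames, hand):
    """one pass of clock LRU-approximation: scan from the hand, clearing
    reference bits; the first unreferenced frame is the replacement
    candidate. if every frame was referenced (a full cycle with no
    candidate), fall back to a RANDOM choice instead of degenerating
    to FIFO (which plain clock effectively does in that region)."""
    n = len(frames)
    for i in range(n):
        j = (hand + i) % n
        if not frames[j]["ref"]:
            return j, (j + 1) % n    # candidate found; advance the hand
        frames[j]["ref"] = False     # referenced: clear bit, second chance
    # full cycle, every page recently referenced -- plain clock would
    # pick the frame the hand started at (FIFO); pick randomly instead
    return random.randrange(n), hand
```

the full-cycle case is where the processing overhead also collapses:
instead of touching every reference bit for no differentiation benefit,
the random pick is essentially free.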

In fact, the original cp67 delivered at the univ. last week jan68,
... also referenced here
http://www.garlic.com/~lynn/2007r.html#74 System 360 EBCDIC vs. ASCII

... effectively implemented something that tended to operate as FIFO
replacement purely in software and didn't make use of the hardware
reference bits. As undergraduate, I did the kernel algorithm and
software changes to implement clock LRU-approximation page replacement
... taking advantage of the hardware reference bits. In this scenario ...
with only on the order of 120 real pageable pages ... this reduced the
time spent in page replacement selection (under relatively heavy load)
from approx. 10 percent of total processor to effectively unmeasurable
(and at the same time drastically improving the quality of the
replacement choice).



Re: System 360 EBCDIC vs. ASCII

2007-11-07 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Timothy Sipples) writes:
 An awful lot of modems and serial connections had to handle 7-bit,
 too, complicating the user experience for dial-up access to host
 systems, BBSes, etc.  Basically if you set your modem to 7 bits, you
 struggled to transfer binary files (see: Kermit), and PC extensions
 for things like line drawing characters looked like a jumbled mess.
 If you set your modem to 8 bits you usually lost the parity bit, so
 you lost what little error checking you had.  And a lot of systems
 still tried to use that high order bit for parity, so you saw a
 jumbled mess on your PC again.  Owners of modem dial-up pools
 installed workarounds to try to detect what the end user had set, but
 this was a mess, too.  On some systems you wouldn't see anything, so
 you didn't know what to do.  (The correct answer: hit Enter a few
 times, or maybe Escape, or)  I'm sure AT&T enjoyed some extra
 earnings as dial-up modem users had to call over and over again,
 hoping to get the configuration settings right through trial and
 error, all because of the complications of 7 versus 8 bits.  This
 affected all sorts of serial connections, including hardwired ones:
 plotters, ASCII terminals, etc.

when cp67 was installed at the univ the last week of jan68, it had
terminal support for 1052s and 2741s ... but the univ. had some number
of tty/ascii devices. so one of the modifications to cp67 was to add
tty/ascii terminal support.

the base cp67 code had some stuff for dynamically determining the
terminal type and switching the 2702 line scanner using the SAD
command. so to remain consistent, i worked out a process to add
TTY/ascii terminal support ... preserving the base cp67 dynamic terminal
type determination. the univ. also was getting a dial-up interface ...
with a base number that would roll-over to the first unused line.  the
idea being that all terminals could dial in on the same phone number,
regardless of type.

this almost worked ... but it turned out that they had taken some
short cuts with the 2702 implementation. the issue was that while the SAD
command would switch the line scanner, the short-cut was that the
line-speed oscillator was hard-wired to each port. for hard-wired lines
... the appropriate terminal types were connected to the appropriate 2702
ports with the corresponding line-speed wired (and then cp67 could
dynamically determine the correct terminal type and switch the line
scanner as needed with the SAD command). However, this wouldn't work for
dial-up lines with a common dial-in pool ... where any terminal type
might get connected to any 2702 port.

so somewhat because of this, the univ. decided to build our own clone
controller that would also be able to perform dynamic line-speed
determination. this involved reverse engineering the 360/67 multiplexor
channel interface and building a channel interface board for an
Interdata/3 minicomputer (the platform for implementing the controller clone).
misc. past posts about the clone controller project
http://www.garlic.com/~lynn/subtopic.html#360pcm

i remember two bugs from the project.

one bug involved red-lighting the 360/67. the 360/67 had a
high-resolution timer that tic'ed approx. every 13 microseconds. the
timer had to update loc. 80 storage when it tic'ed. If the timer tic'ed a
2nd time before the previous tic had been updated in storage (say because
some channel/controller had obtained the storage bus for the period and
failed to release it), the timer would force a red-light/machine check.

the other bug was initially getting ascii data into storage ... after
running it thru the standard ascii-ebcdic translation table, it was all
garbage. we eventually figured out every byte was bit-reversed ...
i.e. the 2702 line-scanner would take the leading bit off the line and
store it in the low-order bit position in a byte (reversing the order of
the bits off the line). the interdata/3 started out doing standard ascii,
taking the leading bit off the line and storing it in the high-order bit
position in a byte. so initially, the ascii bytes were getting to 360/67
main memory as non-bit-reversed bytes and then being run through the
standard 2702 ascii-ebcdic (bit-reversed) translation table.
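the bit-reversal is easy to demonstrate; a hypothetical sketch in python
(the actual fix was in the interdata/3 line-scanner code, not anything
like this):

```python
def reverse_bits(byte):
    """mirror the 8 bits of a byte: the bit that arrived first off the
    line ends up in the low-order position (as the 2702 line scanner
    stored it) rather than the high-order position."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)   # shift accumulated bits left,
        byte >>= 1                      # append next low-order bit
    return out

# ascii 'A' (0x41) bit-reversed is 0x82 -- feeding non-reversed bytes
# through a translate table built for bit-reversed input (or vice versa)
# produces exactly the garbage described above
assert reverse_bits(0x41) == 0x82
assert reverse_bits(reverse_bits(0x41)) == 0x41
```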

this project got written up as the four of us being instrumental in
starting the clone controller business.

of course, all the clone controller business was the major motivation
for the future system project ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#futuresys

including a few with this reference
http://www.ecole.org/Crisis_and_change_1995_1.htm

from above:

IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow 

Re: CSA 'above the bar'

2007-11-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.


[EMAIL PROTECTED] (Van Dalsen, Herbie) writes:
 Someone wants to create a shared block of memory CSA/not and share it
 between programs. My understanding is that a 24-bit program can
 address 24-bit addresses, 31-bit, 64-bit... So in my inexperienced
 mind the 24bit program could never share in the happiness of this
 above the bar heaven of shared storage.

as i mentioned in this post 
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'

... the way that i originally did sharing implementation and mmap
support
http://www.garlic.com/~lynn/subtopic.html#mmap

was that the same shared object wasn't required to occupy the same
virtual address in every virtual address space. however, it could
represent a challenge when program images with relocatable address
constants were involved 
http://www.garlic.com/~lynn/subtopic.html#adcon

there would still be an issue of the amount of happiness (available in
24bit mode) as opposed to any happiness.

it would create a problem for processors that had virtual caches ...
i.e. cache lines indexed by virtual address ... resulting in
synonyms/duplicates in the cache when the same object was addressed by
different virtual addresses.

here is old email discussing dual index 3090 D-cache
http://www.garlic.com/~lynn/2003j.html#email831118

in this post
http://www.garlic.com/~lynn/2003j.html#42 Flash 10208

other posts about virtual cache
http://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#6 Reasons for the big paradigm switch
http://www.garlic.com/~lynn/2006w.html#17 Cache, TLB, and OS

one of the other issues for the TLB (hardware that translates virtual
page addresses to real page addresses) ... all the entries were
tagged/associated with specific virtual address spaces
... i.e. STO-associative.  This generalized mechanism resulted in a
huge number of duplicated entries for the CSA/common-segment. So as a
special case optimization for the whole MVS CSA/common-segment hack gorp
... a special option was provided that identified virtual addresses as
something belonging to the common-segment. These areas then became
associated in the TLB with effectively a system-wide, unique, artificial
common-segment virtual address space (effectively violating the whole
generalized virtual address space architecture ... rather than being
associated with a generalized virtual address space ... it became
associated with a custom operating-system-specific construct that was
known to have very specific characteristics).
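the special-casing can be sketched as follows (a hypothetical python
illustration ... the real mechanism is in TLB hardware, keyed by the
segment-table origin; the tag values and page numbers here are made up):

```python
COMMON_TAG = 0   # artificial, system-wide "common segment" space tag

def tlb_tag(sto, vpage, common_pages):
    # ordinary pages stay STO-associative (tagged with the owning
    # address space); pages identified as common-segment get the
    # artificial system-wide tag, so one TLB entry serves every
    # address space instead of being duplicated per space
    return (COMMON_TAG if vpage in common_pages else sto, vpage)

common = {0x900}                               # common-segment page
tlb = {tlb_tag(0x1000, 0x900, common): 0x42}   # filled under one space
# a different address space (different STO) hits the same entry ...
assert tlb_tag(0x2000, 0x900, common) in tlb
# ... while a private page in that space does not
assert tlb_tag(0x2000, 0x500, common) not in tlb
```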

past post in this thread discussing rise of the whole ugly common
segment gorp
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'

other posts in this thread
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'



Re: High order bit in 31/24 bit address

2007-11-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.

[EMAIL PROTECTED] (Steve Samson) writes:
 As for 32-bit mode (TSS) I don't have a POPS for that architecture but
 I suspect the HO bit is treated as any other. TSS did not use the
 sign bit as a signal, just as an address bit.

lots of 360 documents at bitsavers:
http://bitsavers.org/pdf/ibm/360/

including various functional characteristics
http://bitsavers.org/pdf/ibm/360/funcChar/

specifically 360/67 functional characteristics a27-2719-0
http://bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf
and ga27-2719-2
http://bitsavers.org/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf

which has a lot of the gory details.

as somewhat referenced here ... 360/67 was originally intended for use
by tss/360 ... but for a whole variety of reasons, most of them ran
cp67 (or in straight 360/65 mode with mvt w/o using virtual
memory hardware)
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
courtesy of science center
http://www.garlic.com/~lynn/subtopic.html#545tech


in any case, psw format, pg. 15

bit     meaning
0-3     spare (must be 0)
4       24/32 bit address mode
5       translation control
6       i/o mask (summary)
7       external mask (summary)
8-11    protection key
12      ascii-8 mode
13      machine check mask
14      wait state
15      problem state
16-17   instruction length code
18-19   condition code
20-23   program mask
24-31   spare
32-63   instruction address

...

there were quite a few of these machines used internally. 

one of the projects was adding a 370 virtual machine option to cp67
simulation ... this was having cp67 simulate the new instructions added
to 370 (prior to announcement of 370 virtual memory).

one of the places that deployed a number of these machines was
in the field/data processing/sales division for a project
called HONE
http://www.garlic.com/~lynn/subtopic.html#hone

for hands-on network environment ... the idea was that in the wake
of 23jun69 unbundling announcement
http://www.garlic.com/~lynn/subtopic.html#unbundle

that SEs in the branch office could get operating system hands-on
experience with (370) systems running in cp67 (370) virtual machines.

however, the science center had also ported apl\360 to cms for cms\apl
and done a lot of work enhancing it to operate in large virtual memory
environment (most apl\360 was limited to 16k workspaces, hardly adequate
for many real world problems). With cms\apl, there were lots of new
(internal) apl-based applications developed (some number of them of the
genre that today would be done with spreadsheets) ... including
configurators ... which basically filled out mainframe system orders
for the branch office personnel. As the use of these applications grew on
HONE ... eventually they eclipsed the virtual guest hands-on training
and would consume all available resources. at some point in the 70s, it
was not even possible to submit a mainframe order that hadn't been run
thru HONE configurator.

science center had also done quite a bit of work in the area of
sophisticated system performance modeling ... including laying the
groundwork for what would become capacity planning. some of this
i've commented about with regard to calibrating and validating
http://www.garlic.com/~lynn/subtopic.html#benchmark
the release of my resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

in addition, a flavor of the performance modeling work was also deployed
on HONE as the (apl based) performance predictor. Branch office people
could submit customer configuration and workload details/characteristics
and then ask what-if questions of the performance predictor ... as
to what would happen if there was configuration and/or workload changes.

another project was doing the cp67 changes to support the full 370
virtual memory implementation. this had a version of cp67 running either
in a 360/67 virtual machine (under cp67) or stand-alone on a real 360/67,
simulating virtual machines with full 370 virtual memory operation.  Then
there was a custom version of cp67 that believed it ran on 370 virtual
memory hardware (rather than on 360/67 hardware). This was in regular
production use a year before the first engineering 370 machine with
virtual memory support was operational (and long before announcement).

past posts in the related thread:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#67 CSA 'above the bar'

misc. past posts mentioning performance predictor
http://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
http://www.garlic.com/~lynn/2002b.html#64 ... the 

Re: CSA 'above the bar'

2007-11-05 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


[EMAIL PROTECTED] (McKown, John) writes:
 Just as a thought. Could somebody write a subsystem which starts at IPL
 time, does the shared GETMAIN, then (here's the rub) somehow have that
 memory automatically added to every address space which starts
 thereafter? I don't know enough about subsystems. I would guess that it
 would be easier for said subsystem to implement a PC so that a client
 could request access to the shared GCSA (to coin a phrase for it - G for
 Grande, like the HLASM instructions). The PC would set up all the
 difficult parts and return a 64-bit address to the shared memory
 space.

re:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'

i had done something similar, but different, in the waning days of cp67
and then ported it to vm370. it was a generalized memmap function that
allowed different virtual address spaces to have the same shared memory
object at different addresses.

vm370 started out with a drastic subset of this function that was
cribbed off the virtual IPL command. however, it was dependent on
providing r/o sharing of the same object by segment protection feature
that was part of the original, base 370 virtual memory architecture.

this was one of the features that got dropped when the retrofit of
virtual memory hardware to the 370/165 ran into scheduling problems
... they could regain six months in the schedule if several features were
dropped (and the favorite son operating system in pok claimed that they
didn't find the features really useful).

as a result, this caused all the other processors that had already
implemented the full 370 virtual memory architecture to go back and pull
the dropped features. it also forced the vm370 group to significantly
redo their implementation of how to protect shared segments across
multiple different virtual address spaces (effectively a real kludge that
had been used in cp67).

in any case, a drastic subset of my (generalized) memory mapping and
sharing implementation was eventually released as something called
discontiguous shared segments.

lots of past posts mentioning the cms filesystem changes supporting
memory mapping (and page mapped operation)
http://www.garlic.com/~lynn/subtopic.html#mmap

and numerous posts discussing the difficulty that the os/360
relocatable adcon convention represented for allowing sharing
same object in different virtual address spaces at potentially
different virtual addresses
http://www.garlic.com/~lynn/subtopic.html#adcon

while tss/360 had numerous other problems, they at least adopted a
different convention to address the relocatable address constant issue
for a shared, virtual memory environment.



Re: CSA 'above the bar'

2007-11-05 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] (Binyamin Dissen) writes:
 Does z/VM use virtual storage?

comment in this thread asking how many times has virtual memory
been reinvented
http://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?

some footnotes about the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

from Melinda's paper VM and the VM Community: Past, Present, and Future
http://www.princeton.edu/~melinda

...

What was most significant was that the commitment to virtual memory was
backed with no successful experience. A system of that period that had
implemented virtual memory was the Ferranti Atlas computer, and that was
known not to be working well.  What was frightening is that nobody who
was setting this virtual memory direction at IBM knew why Atlas didn't
work

... snip ...

quoted from L.W. Comeau, CP-40, the Origin of VM/370, Proceedings of
SEAS AM82, September, 1982

and ... 

Creasy had decided to build CP-40 while riding on the MTA. I launched
the effort between Xmas 1964 and year's end, after making the decision
while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I
believe. (R.J. Creasy, private communication, 1989.)

... snip ...

cp40 was built on specially modified 360/40 with virtual memory hardware
... implementing virtual machines. This morphed into cp67 when 360/67
with standard virtual memory became available.

and as per previous post in thread
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'

the initial hack to mvt for os/vs2, in support of 370 virtual memory,
involved borrowing a lot of code from cp67.

lots of the vm370 microcode assists developed during the 70s and early
80s eventually morphed into pr/sm and current day LPARs ...  which is
basically stripped down version of full VM virtual machine function.



Re: CSA 'above the bar'

2007-11-05 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] (Ted MacNEIL) writes:
 That's why there can be a 'double paging' penalty for a LINUX (or
 z/OS, or z/VM, or...).

 z/VM, and its predecessors, has always had the capability to defines
 more storage than is on the box.

 It even has swap files.

i had other problems with the os/vs2 group (initially svs before it
morphed into mvs).

one was all the stuff about LRU replacement algorithms and what it
meant. lots of posts on the subject
http://www.garlic.com/~lynn/subtopic.html#wsclock

early on, the pok performance modeling group had discovered that on a
page fault, if it selected non-changed pages (for replacement) before
changed pages ... it wouldn't need the overhead of doing a write
before the read. i tried to convince them it violated fundamental
tenets of the LRU replacement paradigm. It wasn't until well into MVS
releases that somebody pointed out that they were selecting for
replacement high-use, non-changed, system/shared executable pages
before (lower use) private application data pages (which were
changed/modified).
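the inversion is easy to see in a toy example (hypothetical python,
purely for illustration ... the names and use counts are made up):

```python
pages = [
    # a hot, shared, read-only executable page vs. a cold, modified
    # private data page
    {"name": "shared executable", "changed": False, "last_use": 99},
    {"name": "private data",      "changed": True,  "last_use": 1},
]

# "avoid the pre-write" policy: prefer non-changed pages for replacement
victim = min(pages, key=lambda p: p["changed"])

# it evicts the high-use shared executable page -- the opposite of what
# any least-recently-used policy would choose
assert victim["name"] == "shared executable"
```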

another issue isn't just the double paging overhead ... there is the
possibility that a virtual guest is running a LRU-like replacement
algorithm and selecting a real page holding a low-use virtual page for
replacement (to be refreshed with the missing page). VM may also be
running a LRU-like replacement algorithm and have (also) noticed that the
guest's real page (virtual machine virtual page) hadn't been recently
used and selected it for replacement. The pathological problem is that
the guest may always be deciding it needs one of its real pages (because
the corresponding virtual page wasn't being used) moments after VM has
decided to remove the corresponding guest virtual machine page from real
storage ... aka running a virtual guest's LRU-like replacement
algorithm can violate the premise behind LRU replacement ... since the
guest's real page that corresponds to the guest's least recently used
virtual page has some probability of being the next page that the guest
might actually decide to use
misc. past posts in thread:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'



Re: CSA 'above the bar'

2007-11-04 Thread Anne Lynn Wheeler
Steve Samson [EMAIL PROTECTED] writes:
 The discussion suggests that the dead zone represented an arbitrary
 decision. However it is absolutely necessary to preserve compatibility
 with programs dating back to OS/360. If a 24-bit or 31-bit address is
 interpreted as or expanded to a 64-bit address and the high-order bit
 happens to be on, that would cast the virtual address into the 2-4
 gigabyte range and unpredictable effects could ensue.

 Use of the high-order bit in an address to signal the end of a
 parameter list  is common, and no practical means of filtering or
 converting the programs is available.

 I think the dead zone is necessary in z/VSE for the same reason.

 Other operating systems did not use the high order bit in the same
 way, so there is no need for the dead zone in virtual addresses.

 Has this helped to achieve clarity?

360/67 had both 24-bit and 32-bit virtual addressing modes ... as well
as some other things that didn't reappear until xa. there was some
discussion during xa about returning to the 360/67 32-bit mode
vis-a-vis using 31-bit ... which would have been in the architecture
redbook (the discussion i remember was the difference in operation of
things like the BXH and BXLE instructions between 31-bit and 32-bit
modes).

principles of operation was one of the first major publications done
with cms script ... in large part because it supported conditionals; so,
via the command line, either the whole architecture redbook could be
printed ... or just the principles of operation subset (w/o all the
additional detail ... it was called the redbook because it was
distributed in a 3-ring red binder).

common segment area started out being the MVS solution to moving
subsystems into their own address spaces ... and the pervasive use of
pointer-passing APIs. this was what initially led to the MVS kernel image
occupying 8mbytes of every 16mbyte virtual address space (so for
applications making kernel calls ... the kernel could directly access
the parameter list). however, this pointer-passing api paradigm created
significant problems when subsystems were moved into their own address
spaces (as part of morphing os/vs2 svs to os/vs2 mvs). common segment
could start out as 1mbyte in every address space ... where applications
could squirrel away a parameter list ... and then make a call to the
subsystem (passing thru the kernel for the address space switch).

the problem was for the larger installations, common segment could grow
to 5-6 mbytes that appeared in every application virtual address space
(with the 8mbyte taken out for the kernel image) that might leave only
2-3mbytes for applications (out of the 16mbytes).

the stop-gap solution in the 3033 time-frame was dual-address space mode
(pending access registers, program call, etc) ... there was still a pass
thru the kernel to switch to a called subsystem ... but the called
subsystem could reach back into the calling application's virtual
address space (w/o being forced to resorting to the common segment
hack).

3033 also introduced a different above the line concept.  the mismatch
between processor thruput and disk thruput was becoming more and more
exacerbated. i once advanced a statement that over a period of a decade
or so, disk relative system thruput had declined by an order of
magnitude (or more) ... aka disk thruput increased by 3-4 times while
processor thruput increased by 40-50 times. As a result, real storage
was more and more being used for caching and/or other mechanisms to
compensate for the lagging disk relative system thruput.

we were starting to see clusters of 4341s decked out w/max. storage and
max channel and i/o capacity ... matching or beating 3033 thruput at a
lower price. one of the 4341 cluster benefits was that there was more
aggregate real storage than the 16mbyte limit for 3033. the hack was to
redefine two (undefined/unused) bits in the page table entry.  a standard
page table entry had 16 bits, including a 12bit (4k) page number field
(allowing addressing of up to 16mbytes of real storage). With the two
additional bits, it was possible to address up to 16384 4kbyte pages (up
to 64mbytes of real storage) ... but only 16mbytes at a time.

in real addressing mode ... it was only possible to address the first
16mbytes, and in virtual addressing mode ... it was only possible to
address a specific 16mbytes (but it was possible to have more than 4096
total 4kbyte pages, some of which could reside above 16mbytes real).
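the arithmetic behind the two redefined bits (a quick python check):

```python
PAGE = 4096               # 4kbyte pages
base = (1 << 12) * PAGE   # 12-bit page number field: 4096 pages
ext  = (1 << 14) * PAGE   # plus 2 redefined PTE bits: 16384 pages

assert base == 16 << 20   # 16 mbytes of addressable real storage
assert ext  == 64 << 20   # 64 mbytes ... though only 16 mbytes at a time
```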

it was possible to use a channel program IDAL to specify an address
greater than 16mbytes real (allowing data to be read/written above the
16mbyte line). however, the actual channel programs were still limited
to residing below the 16mbyte line. some of this was masked by the whole
channel program translation mechanism that was necessary as part of
moving to the virtual memory environment. the original transition for mvt
was hacking a little bit of support for a single virtual address space
(i.e. os/vs2 svs) and 

Re: IBM System/3 3277-1

2007-10-28 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to 
comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

Rostyslaw J. Lewyckyj [EMAIL PROTECTED] writes:
 If memory hasn't failed me, we read mark sense cards on something that
 was called a 1230. We didn't have one in the computing center. It was
 in a separate laboratory somewhere in the School of Education.
 We sent the decks over there. I don't remember what we got back.
 I think the 1230 may have punched the marked card.

re:
http://www.garlic.com/~lynn/2007q.html#71 IBM System/3 & 3277-1
http://www.garlic.com/~lynn/2007r.html#2 IBM System/3 & 3277-1

wiki mark sense page 
http://en.wikipedia.org/wiki/Mark_sense

mentions that 513, 514, 557, and 519 could handle mark sense. also
has pointer to 805 test scoring machine.

513 & 514 reproducing punches could handle mark sense ... so it is
possible that a 513/514 had preprocessed the mark sense student
registration cards ... and the 2540 was only processing the reproduced
punch cards (and i was just not paying that much attention).

the wiki reference also has url for 513/514 (pdf) reference manual



Re: IBM System/3 3277-1

2007-10-28 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to 
comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 3277 had quite a bit of local intelligence ... it was possible to do
 some custom stuff in the terminal that changed the repeat start-delay
 and repeat ... as well as adding fifo to handle keyboard locking up if
 you happen to be typing when the system went to (re)write something on
 the screen. the move to 3274 controller for 3278/3279/etc terminals ...
 moved all that intelligence back into the controller ... reducing amount
 of electronics and manufacturing costs. with electronics moved back into
 controller ... it also degraded performance and response. 

re:
http://www.garlic.com/~lynn/2007r.html#7 IBM System/3 & 3277-1
http://www.garlic.com/~lynn/2007r.html#8 IBM System/3 & 3277-1

somebody picking around in some of the referenced old postings sent
private email asking about the reference to ANR download being 2-3 times
faster than DCA download ... and what was ANR ... other than APPN
Automatic Network Routing.

ANR was 3272/3277 ... vis-a-vis DCA 3274/3278-9. In addition
to DCA having slower human (real terminal) response ... because
so much of the electronics had been moved back into the controller,
it also affected later terminal emulation download thruput.

a quick search engine query for 3277 & anr turns up
http://www.classiccmp.org/pipermail/cctech/2007-September/084640.html

misc. past posts mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

as client/server started to proliferate ... the communication
group made various attempts (like SAA) to protect their
terminal emulation install base. when we came up with
3tier/multi-tier architecture ... we took lots of heat from
the sna and saa forces. misc. posts mentioning coming up with
multitier networking architecture
http://www.garlic.com/~lynn/subnetwork.html#3tier

for other drift ... APPN started out as AWP164. For a time,
the person responsible and I used to report to the same
executive. I would periodically chide him that the communication
group didn't appreciate what he was doing and that he should
instead work on real networking (like tcp/ip). In fact, the
communication group non-concurred with announcing APPN. After
some delay and escalation, the announcement letter was carefully
rewritten to not state any connection between APPN and SNA.

of course we were also running hsdt project ... misc. posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

and recent post illustrating gap between what we
were doing and what the communication group was doing
http://www.garlic.com/~lynn/2007p.html#64

part of the issue was that in early days of SNA ... my wife had
co-authored AWP39 ... peer-to-peer networking ... which the
communication group possibly viewed as competitive with their
communication activity. she was then con'ed into going to pok to be in
charge of loosely-coupled architecture and was frequently battling with the
SNA forces, arguing that SNA wasn't appropriate for loosely-coupled operation. She
came up with peer-coupled shared data architecture ... which didn't see
a lot of uptake until sysplex ... except for IMS hot-standby
... misc. past references
http://www.garlic.com/~lynn/subtopic.html#shareddata

recent posts mentioning AWP39
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. Server (Was Just 
another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 
36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 
36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 
applications
http://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
http://www.garlic.com/~lynn/2007p.html#12 JES2 or JES3, Which one is older?
http://www.garlic.com/~lynn/2007p.html#23 Newsweek article--baby boomers and 
computers
http://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by 
WLM's rules



Re: IBM System/3 3277-1

2007-10-27 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to 
comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

bbreynolds [EMAIL PROTECTED] writes:
 This thread started about the 3277-001 used on a System/3 Model 15
 (would that be a 5415?): as 3277's relied on the 3271/3272/3275 for
 the major portion of their intelligence, I would assume that there
 would have had to have been some pretty substantial hardware in the
 System/3 to make the 3277-001 believe it was attached to a
 controller. I can't think how the functions would be split out on a
 3277 not on a controller; unless the 3277-001 was gutted.  Any hint
 if a cable other than a simple coax connected the 3277 to the CPU?

3277 had quite a bit of local intelligence ... it was possible to do
some custom stuff in the terminal that changed the repeat start-delay
and repeat ... as well as adding a fifo to handle the keyboard locking up if
you happened to be typing when the system went to (re)write something on
the screen. the move to 3274 controller for 3278/3279/etc terminals ...
moved all that intelligence back into the controller ... reducing amount
of electronics and manufacturing costs. with electronics moved back into
controller ... it also degraded performance and response. 

several of us complained about it ... but were told that 327x terminals
were targeted at data entry market and didn't have the requirements for
interactive response and human factors that would be needed for
something like interactive computing. as seen in some of the referenced
performance comparisons ... say
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

... it was much more difficult to achieve subsecond response with
3274/3278 vis-a-vis 3272/3277. However, for mvs/tso with system response
already on the order of a second (or much worse) ... it was a pretty
negligible consideration. however, heavily loaded vm/cms systems tended
to be more on the order of a quarter second (or less; one system i had
care and feeding of ... was on the order of .11 seconds 90th percentile
for trivial interactive response under heavy load).

past posts mentioning some (hardware) fixes to 3277 ... and not being
able to do anything with later 3278/3279 because even that bit of
electronics had been moved back into the controller (and/or some other
3272/3277 issues vis-a-vis 3274/3278).
http://www.garlic.com/~lynn/94.html#23 CP spooling  programming technology
http://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of 
his professional life
http://www.garlic.com/~lynn/99.html#28 IBM S/360
http://www.garlic.com/~lynn/99.html#69 System/1 ?
http://www.garlic.com/~lynn/99.html#193 Back to the original mainframe model?
http://www.garlic.com/~lynn/99.html#239 IBM UC info
http://www.garlic.com/~lynn/2000c.html#63 Does the word mainframe still have 
a meaning?
http://www.garlic.com/~lynn/2000c.html#65 Does the word mainframe still have 
a meaning?
http://www.garlic.com/~lynn/2000c.html#66 Does the word mainframe still have 
a meaning?
http://www.garlic.com/~lynn/2000c.html#67 Does the word mainframe still have 
a meaning?
http://www.garlic.com/~lynn/2000d.html#12 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000g.html#23 IBM's mess
http://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
http://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran 
as slow as today's supercompu
http://www.garlic.com/~lynn/2001i.html#51 DARPA was: Short Watson Biography
http://www.garlic.com/~lynn/2001k.html#30 3270 protocol
http://www.garlic.com/~lynn/2001k.html#33 3270 protocol
http://www.garlic.com/~lynn/2001k.html#44 3270 protocol
http://www.garlic.com/~lynn/2001k.html#46 3270 protocol
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2001m.html#17 3270 protocol
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
http://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: 
Itanium troubles)
http://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002i.html#48 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
http://www.garlic.com/~lynn/2002j.html#74 Itanium2 power limited?
http://www.garlic.com/~lynn/2002j.html#77 IBM 327x terminals and controllers 
(was Re: Itanium2 power
http://www.garlic.com/~lynn/2002k.html#2 IBM 327x terminals and controllers 
(was Re: Itanium2 power
http://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers 
(was Re: Itanium2 power
http://www.garlic.com/~lynn/2002m.html#24 Original K & R C Compilers
http://www.garlic.com/~lynn/2002p.html#29 Vector display systems
http://www.garlic.com/~lynn/2002q.html#51 windows office xp
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
