[email protected] (Shmuel Metz, Seymour J.) writes:
> I don't know what Agile specifies, but I was involved in rapid
> prototyping and we most definitly had reviews (code and design), unit
> testing and regression testing.

re:
http://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
http://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?

i had continued to work on 370 stuff all during the future system period
(when they were killing off 370 stuff, which is also credited with
giving clone processor makers market foothold) ... and even periodically
ridiculing FS activity (which wasn't exactly a career enhancing
activity) ... past posts mentioning Future System
http://www.garlic.com/~lynn/submain.html#futuresys

Eventually I migrated lots of my stuff from cp67 to vm370
(as part of the product migration from cp67 to vm370 there was a lot of
simplification, including dropping a lot of performance stuff that I had
done as an undergraduate in the 60s). some old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
including this from 40yrs ago this month ... re: csc/vm, one of my
hobbies was providing enhanced operating systems to internal datacenters
http://www.garlic.com/~lynn/2006w.html#email750430

with the implosion of FS, there was a mad rush to get stuff back into
the product pipelines, which contributed to picking up some of my stuff
and including it in the standard release.

with the 23Jun1969 unbundling announcement there was legal pressure on
the company to start charging for software, however, the company managed
to make the case that kernel software should still be free ... past posts
http://www.garlic.com/~lynn/submain.html#unbundling

However, clone processors getting a market foothold during the FS
period contributed to the decision to start migrating to "charged-for"
kernel software ... initially this was separate add-on kernel pieces ...
and some amount of my kernel software was selected as the guinea pig for
charged-for kernel software, released as the "resource manager" (and I
got to spend some amount of time with business people and lawyers about
software charging policies). some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

I had developed an automated benchmarking process that included being
able to simulate a wide variety of configurations and workloads ...
including severe stress testing that initially precipitated system
failures (which had to be fixed/corrected). As a side effect of the
stress testing, I completely rewrote the kernel serialization mechanism,
eliminating all cases of zombie/hung users and asynchronous
activity failures. some past posts
http://www.garlic.com/~lynn/submain.html#benchmark

In any case, as part of the initial release of the "resource manager"
there were over 2000 automated benchmarks that took three months elapsed
time ... as part of validating the product. The first 1000 benchmarks
were systematically chosen to cover a wide range of known configurations
and workloads (including some heavily stressed operating points well
outside normal environments).
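That sort of systematic coverage (a cross product of known
configurations and workloads, plus deliberate stress points) might be
sketched like this. All the parameter names and values here are invented
for illustration; the actual vm370 benchmark dimensions aren't given in
the post:

```python
import itertools

# Hypothetical benchmark dimensions -- the actual vm370 configuration
# and workload parameters are not described in the post.
real_storage_mb = [4, 8, 16]            # machine memory sizes
active_users    = [10, 40, 80, 160]     # simulated logged-on users
workload_mix    = ["interactive", "batch", "mixed"]

# Systematic coverage: the full cross product of known configurations
# and workloads ...
benchmarks = [
    {"storage": s, "users": u, "mix": m}
    for s, u, m in itertools.product(real_storage_mb, active_users, workload_mix)
]

# ... plus stress points well outside normal operating environments,
# e.g. ten times more users than the largest normal load.
benchmarks += [
    {"storage": s, "users": 10 * max(active_users), "mix": "mixed"}
    for s in real_storage_mb
]
```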

The final set of benchmarks was then under control of a modified
version of the "performance predictor". It would look at detailed data
on all benchmarks run to date, select the combination of configuration
and workload to be run next, predict what the results would look like,
activate the benchmark ... and when it finished, check the actual
execution data against the predicted (effectively validating both the
resource manager operation and the performance predictor's
predictions).
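That closed loop (examine all results to date, pick the next
configuration/workload, predict, run, compare) could be sketched roughly
as below. Everything here is an assumption for illustration: the real
predictor's selection logic and model aren't described in the post, so
this uses a trivial nearest-neighbour model in which throughput scales
inversely with user count:

```python
def run_benchmark(config):
    """Stand-in for activating an automated benchmark; here throughput
    simply falls off with the number of simulated users."""
    return 100.0 / config["users"]

def predict(history, config):
    """Stand-in for the 'performance predictor': scale the nearest
    measured result by the ratio of user counts (assumed model)."""
    if not history:
        return None                      # nothing measured yet
    nearest = min(history, key=lambda h: abs(h["users"] - config["users"]))
    return nearest["throughput"] * nearest["users"] / config["users"]

def select_next(history, candidates):
    """Pick the untried configuration farthest from anything measured
    so far (a stand-in for the predictor's actual selection logic)."""
    if not history:
        return candidates[0]
    return max(candidates,
               key=lambda c: min(abs(c["users"] - h["users"]) for h in history))

def validation_loop(candidates, tolerance=0.25):
    """Predict, run, compare: a large miss flags a problem in either
    the system under test or the predictor model."""
    history, failures = [], []
    while candidates:
        config = select_next(history, candidates)
        candidates.remove(config)
        predicted = predict(history, config)
        actual = run_benchmark(config)
        if predicted is not None and abs(actual - predicted) > tolerance * predicted:
            failures.append((config, predicted, actual))
        history.append({"users": config["users"], "throughput": actual})
    return history, failures
```

With this toy model the predictions match the measurements exactly, so
the failure list stays empty; in the scenario described above, a
mismatch would point at either a resource manager regression or a
predictor error.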

The standard product put out a maintenance release every month ... and I
got asked to do "resource manager" integration with the standard monthly
maintenance release on the same schedule. I countered with an updated
integrated maintenance release every three months. I claimed that I
needed to perform a minimum of 24hrs of benchmarking regression
validation before every new release ... and I had a fulltime job that I
was otherwise busy with ... supporting the product for customers was in
spare time from other duties.

I noticed that other performance products were doing performance
regression testing for major releases similar to (or less than) what I
was doing for each maintenance release. It was one more time I realized
that I would never fit into standard product development.

semi-related topic drift ... after transferring to SJR, I would wander
around the main plant site. Disk engineering had several mainframes used
for development testing. At the time they were pre-scheduled for
stand-alone operation around the clock. At one point they had tried
doing testing under MVS ... hoping to do some concurrent testing.
However, in that environment MVS had a 15min MTBF (system crash/hang
requiring manual re-ipl). I offered to rewrite the I/O supervisor so
that it was bulletproof and would never fail ... allowing any amount of
concurrent, on-demand testing ... greatly improving productivity. some
past posts
http://www.garlic.com/~lynn/subtopic.html#disk

MVS RAS expressed some interest ... though not, as I expected, in what
things were fixed ... folklore was they tried to have me let go ... for
having pointed out the 15min MTBF. A few years later, as 3380 disks were
being released, every one of the standard FE regression error tests was
still resulting in MVS failure (2/3rds of the cases with no indication
of what caused the failure).

-- 
virtualization experience starting Jan1968, online at home since Mar1970
