Typos.  Nuts.  More below.

On 6/10/06, Roy A. Crabtree <[EMAIL PROTECTED]> wrote:

Actually, J is _faster_.  In most ways; one of the points of brilliance in
its design and designers.

I do not expect the world's foremost optimizing incremental compiler (but
I almost get it,
simply because _clean_code_ is so rare that J performs almost the same
interpreted
as dirty language implementations and programs do compiled with an
optimizer).


  _compiled_, not comled.

   Either a sloppy laptop keyboard, me as a bad typist, or a viral attack
mimicking typos.  Take your pick.

Look, here is the same point made in a different field:

Look at news:alt.callahans under my name and "...that stupid?"
or:

I would suggest looking at the URL
http://www.app.com/apps/pbcs.dll/article?AID=/20050717/BUSINESS/507170346/1003

It contains a very brief description of the efforts of the same
architect who put up the St. Louis Arch.

The successors to Bell Labs (Lucent, then Avaya) claim it is too expensive to
operate.
However, they have only 1,500 people in a building that supported 5,600+.

If you run a packing fraction (like computer systems SW) of 22-24%, you are
guaranteed to increase operating costs per occupant 4-5 fold: the roughly
fixed cost of the building divides over only a quarter of the people it
could hold.
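
A quick check of that arithmetic in J (assuming, as stated above, that
per-occupant cost scales as the reciprocal of the packing fraction):

   NB. per-occupant cost multiplier at 22% and 24% packing
   % 0.22 0.24
4.54545 4.16667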

But the arrogance of most of this new era's business management runs:

  IF there is a failure, THEN it cannot be due to their incompetence or
arrogance;
  THEREFORE it costs too much to operate.

The building's design brilliance has many points:

- glass for heat and thermal control: opening the building's top is one of
many innovations.
- the internal structures are totally decoupled.
- 2 million square feet of floor space and 75,000,000 cubic feet of volume.
- modular, Erector-set/Tinkertoy/Lego-style office setup and breakdown.

That is enough for offices for 50,000 if the packing fraction were 100%, using
1,500 cubic feet per office.
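
(The 50,000 figure follows directly; a quick J check:)

   NB. offices at 1,500 cubic feet each out of 75,000,000 cubic feet
   75e6 % 1500
50000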

Suppose the main operational costs (packing fraction, access/parking, and
energy cost) are too high.

SOLVE them:

1) Increase the packing fraction:  Bell Labs never used more than about 10%
loading.  Change the building's internal office layouts.
   Re-engineer them using modern modular concepts and materials
engineering, then refurbish.  Cost: 1-2 million.

2)  Access/parking:  oh so trivial on a 475-acre site, or solve it the modern
way:  parking structures or local trams or whatever.
   For 20,000 occupancy: 5-10 million (small costs).

3)  Energy cost too high:

   A) Physical plant:  replace, upgrade, rebuild.  As part of an internal
refurbish, incrementally cheap, 5-10 million.  Slightly more otherwise.
   B)  Glass technology out of date: doubtful.  BUT: plastic resurface or
spray after cleaning.  1-5 million.
   C)  Leaks on the glass panels: reseal them all by removing/replacing the
seals?  Sure, but why bother?
         Reseal using overlays/appliqués on the seals.

If you do these things staged and/or incrementally, the results could be
kept within pace of rentals,
especially if you modernized the support services ongoing, and rented out
the older facilities until replacement.

I am sure there are many techniques I am not aware of that would do this.

But the net arrogance and stupidity of the management of Lucent and Avaya
(or their successors) appear to have made them forget to use what they
inherited from the result of the Bell System breakup.

Sigh.

Similar re-engineering on the twin WTC towers could have done much,

either in retrospect, or at the beginning.

Retrospect:  placing fire stairwells at the corners.

   More costly?  Sure.
   Doable?  Absolutely.

   Why not done?  The realtors wanted the prime corner locations to sell to
customers.
   Thus: the original design was REQUIRED to place the stairwells wrongly.
   And retrofitting would not have been approved, even though sound.

At original design time:

(Small conceptual changes at the beginning, such as planning for movable
floors (tracked, slotted joisting),
 or protecting the fire standpipe inside the external shell structure: blast
and fire safeguarding of the standpipe,
  and water to cool the external shell and other members against heat
softening).

    If the standpipe was _designed_ to fit into the external frame, then
access points for pumping stations would
    be possible.  (You do NOT have to PLACE the station; but a fire
department interface would put portable
    pumping stations plus rerouting hose in the basement access points, or
    be conformed to tools the NYFD already uses.)

   If the flooring was DESIGNED to allow internal movement (rarely, when a
renter wanted more headroom),
   then the joisting would have been designed for it (too long to explain)
and the weight of the floor changed
   (concrete would likely have been replaced by better use of alternative,
lighter-weight methods).

   Take down the flooring weight, and you dump off say 85% of the weight of
each floor, and some 400,000
   tons of concrete would go away.

   Then additional support bands (think of a crossing form pattern) could
be used either at original design
   time (to reduce sway: loss of a third of the building's weight would make
it more mobile) or later as a retrofit
   to increase building support.

   And if you have movable floors as a design CONCEPT, then increased
hardiness in the STAYS that
   hold the floors would have come up: allowing for periodic extra-durable
   stays to be used.

     In short, the progressive collapse might not have occurred.

Even if all these are correct and workable (they may not be, or may not
provide the benefits I am suggesting),
I am NOT criticizing the original team (the design was brilliant for its time
and the amount of time and
resources they were given).

But I _am_ critiquing later generations for not using more modern materials
technology to _fix_ the problem before a collapse occurred.

(PLEASE do not tell me it could not be anticipated).

Similar things are true in the computer world, using analogies to carry the
concepts.

_This_ is the problem facing technologists:  failure to use and teach what we
already have to solve existing problems
before NEW ones (like Katrina/New Orleans: known for 120 years by report to
Congress) overwhelm us as a civilization.

It can be done, but you really have to open up your head/heart/soul space to
do it.

The first step, IMHO, is to admit internally and recognize where _you_ see
_you_ have been an idiot.

    (Not by my standards or measures: your own).

Then: fix that first.  It is (sigh, painfully, first person) a lifelong,
at times painful, process.

But it can also be fun.  If you work at it correctly.

The USA has done a very poor job of that over the last 50 years.  We need to
shift gears.  Carefully.  NOW.
Time is always critical.

Cheers.  More later.

Thumbs up to Jsoft.

If you are satisfied, fine.

But me, I do get tired of waiting 3-15 seconds for Win/XP to pull up the
d-----ned drop-up menu ....


On 6/10/06, Randy MacDonald <[EMAIL PROTECTED]> wrote:
>
> Hello Roy;
>
> No low expectations here! I've been happy with APL's performance since
> 1991.
> Sure, APL/J is slower, but it's no longer easy to notice.
>
> ------------------------------------------------------------------------
>
> |\/| Randy A MacDonald   | APL: If you can say it, it's done.. (ram)
> |/\| [EMAIL PROTECTED]  |
> |\ |                     |If you cannot describe what you are doing
> BSc(Math) UNBF'83        |as a process, you don't know what you're doing.
> Sapere Aude              |     - W. E. Deming
> Natural Born APL'er      | Demo website: http://156.34.89.50/
> -----------------------------------------------------(INTP)----{ gnat }-
>
>
> ----- Original Message -----
> From: "Roy A. Crabtree" <[EMAIL PROTECTED]>
> To: "General forum" < [email protected]>
> Sent: Saturday, June 10, 2006 11:02 PM
> Subject: Re: [Jgeneral] spreadsheets
>
>
> Wow.  Perhaps your expectations over the years have been distorted by
> repeated disappointment,
> leading to the _expectation_ of paying $$$ and getting a fraction back.
>
> On 6/10/06, Randy MacDonald <[EMAIL PROTECTED]> wrote:
> >
> > Hello J.R.;
> >
> > I'd love to see examples of where today's computers seem slower than
> > those of the 1970s.  It's just not the impression I get.
>
>
>
> I do not get the impression that today's _computers_ seem slower than the
> 70s.
>
> I do get the impression that today's _software_ _is_ slower.
> Relatively, as well as in many to most cases absolutely.
>
> Do the math.  I leave out networking, graphics, secondary storage, etc.;
> this is just an oversimplified first-cut analysis.
> Please note that the computer I am citing is in fact NOT 70s but late 80s;
> I have to do this to even begin
> to demonstrate how far the difference actually is:
>
> 486 (32bit/32bit) DX 33MHz with 256KB cache at 20nsec, using 64MB RAM at
> 120 to 150 nsec
> P6 (64bit/32bit) 3072MHz with 4096KB cache at 2nsec, using 1024MB DDR at 15
> nsec (roughly).
>
> P6/486: relative architecture comparison, leave it at 1:1
> 64bit vs 32bit data width: 2:1
> 32/32 address width: 1:1
> 3072/33 clock: 93:1
> 4096/256 cache size: 16:1
> 20/2 cache latency: 10:1
> 1024/64 RAM size: 16:1 (but main memory on a Windows box seems to need 4GB
> for multitasking and graphics to work well): 64:1
> RAM vs DDR: should be the thunk-width ratio, let's leave it at 1:1
> 150/15 RAM latency: 10:1
> You do not like my ratios or figures?  Use your own.
>
> HW: 1 x 2 x 1 x 93(100) x 16 x 10 x 16/64 x 1 x 10 == 51200/204800
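>
>     (A quick J check of that product, as I read the factors: the quoted
> totals of 51200/204800 follow if the two 10:1 latency ratios are dropped,
> presumably as double-counting against the clock ratio.)
>
>    */ 2 100 16 16   NB. width * clock * cache size * RAM size (16:1 case)
> 51200
>    */ 2 100 16 64   NB. same, with the 64:1 main-memory figure
> 204800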
>
>     You want to argue against the other throughput metrics?  Feel free.
>
> Yah, your rough hardware throughput power envelope, computationally, is
> 50,000 to 200,000 times more than the late 1980s/early 1990s.
>
> ... oh, I forgot the $$$ factor ...   you could buy the 486 system for
> around $1500-4500, where the 3GHz is $750-1500;
>     build your own and strip to just the populated motherboard, take out
> the laptop factor, and you get $250 versus $500.
>
>      So you lose a 2:1 factor, relatively.
>
>    But: salaries and inflation taken back in give you another ratio of
> 4:1 in your favor on discretionary spending (though today such a purchase
> is NOT discretionary).
>
> You understand why I cannot easily go back to the 70s?
>
>      I learned on an IBM 1401.  An 8-bit-minus architecture with an
> instruction set 1,000 times more powerful than today's;
>      12K total core, with likely NO cache memory (direct), on God knows
> what clock rate (go look it up).
>
>            This in turn ran the entire Guilford County school district, and
> leftover time was used for high school student projects.
>
> So, likely, the step up is another 100,000 times at LEAST.
> -----
> Now take the software then versus the software today: late 80s versus now.
>
> I cannot do so without breaking out the actual categories, and do not
> have
> the time to do any kind of real analysis:
>
>    I can only point to the monumental levels of SLOP involved in today's
> software
> (J being a very powerful pocket example that shows it does not need to be
> so).
>
> I) OS software:  The number of assembly steps to get a single OS-level
> executive ABI call done:
>       70s: 20-100 instructions (go look)
>       80s: 1000-4000 instructions
>       90s: 5000-25000 instructions
>
> Approximately; go look up your own factors, yeesh.
>
>        This INCLUDES the SAME functions that WERE present then, as well
> as
> the later ones of more complexity.
>
>        Step-down ratios seem to be 6:1 versus the late 80s and up to
> 10,000:1
> since the 70s.
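>
>      (Taking the instruction counts above at face value, a quick J check;
> the 10,000:1 end evidently folds in more than the raw instruction counts:)
>
>    25000 % 4000   NB. worst-case 90s call vs. late-80s call
> 6.25
>    25000 % 20     NB. worst-case 90s call vs. 70s call
> 1250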
>
>      In other words, common OS functions cost up to 10,000 times more
> compared to 1970s OSes.
>      Oversimplified?  Yes.  Always correct ratios?  No.  But: when you
> establish ANY ratio over 4:1 you then have to
>      worry about WSS and CHR (look them up) versus main-memory-to-cache-to-
> processor speeds, and aliased goal-missing from clock stalls.
>
>          Your 100:1 processor gets down from 4th gear to 1st really fast
> that way, all software-bound.
>
>         Think of a Lamborghini Countach goin' down the highway doin' 5 MPH.
>
>      Because: who does either overlays (address space reduction, memory
> reduction), or cache memory hit rates (HW monitor needed), or
>     cache-versus-main-memory hit rate (static analysis at optimization
> time,
> be it first compile or final load), or processor/phase process binding
>     to maintain CHR (affining seems to be a lost art outside of SGI and a
> few
> other really big efforts)?
>
>          You lose another 10,000:1.
>
> How about graphics?
>
>     70s:  MIT, using a 4MHz CSMA/CD network, runs 10 FPS graphics in 16-bit
> color from
> a single processor to X stations ...
>     80s/early: I used to use a _386_ running at _12_ MHz WITHOUT a smart
> graphics card on X10 to show how slow Windows was AND IS:
>
>        I could get 640x480x256 full-color real video travelling at 14-18
> FPS
> and still have resources left over to do real work.
>
>      Today: I fail to understand why my network graphics feed is NOT using
> store-and-forward nodal feeds nationwide from the central server
>         to maintain adequate Class D (IP select broadcast) data pools for
> EVERY video multimedia feed to remain at full speed, and also to
> dynamically
>        adjust itself to the end user's needed bandwidth: some need 1 inch
> by 1 inch;
> others need a 17-inch laptop at 32 bit.
>
>    As near as I can tell, the common software currently in use is actually
> using
> 1,000,000 times the resources it actually needs to consume;
>    both on the network, as well as on the DTE (look it up: in this case
> the
> displaying PC).  More correctly, this ratio:
>
>                (OLD needed versus used) / (NEW needed versus used)
>
>      is improvable by about 1e5 to 1e8.
>
>      (If you disagree, I will want $$$ up front before I go through the
> whole analysis).
>
> How about primary versus tiered secondary storage speeds and buffering?  Do
> I
> need to do the analysis?
>
>     Anyone know of an OS or compiler or DB or OLTP that does H-versus-Z
> buffer analysis, or
>      localized CPU potting (most disk drives have a CPU smart enough to
> form
> a true array processor;
>      you can push the TPC benchmark rates up to 1000 times the
> performance
> at 100% storage
>       usage if you apply that lever: defeating the purpose of the
> specifications)?
>
>      How about selective processing load stall (see TSO) for the purpose
> of
> accumulating a list (see HASP)
>      of jobs to apply locally in order to reduce critical-resource-
> constraint stalls (see disk head movement delays versus
>      H, I, T, and Z scheduling)?  Much less a full tiling, anti-aliasing,
> anti-missed-deadline packing fraction optimizer in real time.
>
>     JIT or incremental optimization or re-optimizing when conditions
> shift?
> Not a chance.
>
>       _yet_.  RSN.  Watch the FP and gridding arenas.
>
> I have not even begun to list them.  The truly effective folks are
> already
> using these techniques.  You just do not hear about it.
>
> Basically, on almost any problem a single user can invoke on a single
> computer, and MOST of the ones regarded as big on Beowulfs or grids,
>
>     ... if you sneeze and it ain't done, the software (hardware
> integration
> & optimization) suck(s).
>
> But I never got used to paying twice as much every two years for more
> software
> that ran relatively that much slower, so that in two years ...
>
>     Microswift could do it again.  While I paid for it.
>
> Of course, perhaps I am wrong.
>
> But I doubt it.
>
> (I did not invent the analyses or efficiency methods: so I get no
> Kudos.
> The REALLY bright people did that; all I did was READ it;
>
>     and REMEMBER where to apply it.)
>
> So, basically, I am giving a thumbs up to J in the area of gridding.
>
> >
> > ------------------------------------------------------------------------
> > |\/| Randy A MacDonald   | APL: If you can say it, it's done.. (ram)
> > |/\| [EMAIL PROTECTED]  |
> > |\ |                     |If you cannot describe what you are doing
> > BSc(Math) UNBF'83        |as a process, you don't know what you're doing.
> > Sapere Aude              |     - W. E. Deming
> > Natural Born APL'er      | Demo website: http://156.34.89.50/?JGeneral
> > -----------------------------------------------------(INTP)----{ gnat }-
> >
> > ----- Original Message -----
> > From: "John Randall" <[EMAIL PROTECTED]>
> > To: "General forum" < [email protected]>
> > Sent: Saturday, June 10, 2006 7:44 PM
> > Subject: Re: [Jgeneral] spreadsheets
> >
> >
> > > Don Guinn wrote:
> > > > Building a spreadsheet is still programming. It's just not
> > > > procedural.
> > > > Entering a document with markup into a word processor is also
> > > > programming. Don't forget the old EAM machines with boards. That was
> > > > programming too. I am still impressed by that architecture as those
> > > > were machines with cycle times of as little as 3 hertz and could still
> > > > outperform today's PCs with gigahertz speeds.
> > > >
> > >
> > > Point taken.  I think many people are more comfortable with hitting
> > > recalc until the spreadsheet stabilizes than trying to understand how
> > > u^:_ works.
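> > >
> > > (For reference, a minimal J illustration: u^:_ applies u repeatedly
> > > until the result stops changing.  Iterating cosine from 1 to its fixed
> > > point:
> > >
> > >    2&o.^:_ ] 1   NB. cosine applied until convergence
> > > 0.739085
> > >
> > > is exactly the "recalc until stable" loop, done by the interpreter.)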
> > >
> > > I, too, am unimpressed with the speed of current computers, especially
> > > when they are used interactively.  My first personal computer was a
> > > TRS-80 Model III, with 16K RAM and a 2 MHz Z80.  My current computer is
> > > about 1000 times as fast and has about 50000 times as much memory.  I do
> > > not see comparable improvements in performance.  In a recent post, Roy
> > > Crabtree gave examples in the same vein.
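> > >
> > > (Those multipliers, checked in J, put the current machine at roughly
> > > 2 GHz with about 800 MB of memory, consistent with a mid-2000s PC:
> > >
> > >    2e6 * 1000    NB. 2 MHz Z80 times the 1000x speed factor
> > > 2e9
> > >    16e3 * 50000  NB. 16K RAM times the 50000x memory factor
> > > 8e8
> > >
> > > so the stated ratios are plausible.)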
> > >
> > > Texas Instruments still uses the Z80 (overclocked to 4 MHz) in its
> > > calculators.  This is a chip which has no floating-point arithmetic, and
> > > does not even have hardware (integer) multiplication.  The arguments for
> > > sticking with this come down to the fact that it is good enough for
> > > interactive use, TI has mature libraries, and power consumption is low.
> > > A calculator will run for months on AA batteries.  Compare this with
> > > cell phones.
> > >
> > > Best wishes,
> > >
> > > John
> > >
> > >
> > >
> > > ----------------------------------------------------------------------
> > > For information about J forums see http://www.jsoftware.com/forums.htm
> > >
> >
> >
> > ----------------------------------------------------------------------
> > For information about J forums see http://www.jsoftware.com/forums.htm
> >
>
>
>
> --
> --
> Roy A. Crabtree
> UNC '76 gaa.lifer#11086
> Room 207 Studio Plus
> 123 East McCullough Drive
> Charlotte, NC 28262-3306
> 336-340-1304 (office/home/cell/vmail)
> 704-510-0108x7404 (voicemail residence)
>
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
> http://www.authorsden.com/royacrabtree
> http://skyscraper.fortunecity.com/activex/720/resume/full.doc
> --
> (c) RAC/IP, ARE,PRO,PAST
> (Copyright) Roy Andrew Crabtree/In Perpetuity
>     All Rights/Reserved Explicitly
>     Public Reuse Only
>     Profits Always Safe Traded
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm
>
>



--
--
Roy A. Crabtree
UNC '76 gaa.lifer#11086
Room 207 Studio Plus
123 East McCullough Drive
Charlotte, NC 28262-3306
336-340-1304 (office/home/cell/vmail)
704-510-0108x7404 (voicemail residence)

[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]

http://www.authorsden.com/royacrabtree
http://skyscraper.fortunecity.com/activex/720/resume/full.doc
--
(c) RAC/IP, ARE,PRO,PAST
(Copyright) Roy Andrew Crabtree/In Perpetuity
    All Rights/Reserved Explicitly
    Public Reuse Only
    Profits Always Safe Traded




----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
