Hello Roy;

No low expectations here! I've been happy with APL's performance since 1991.
Sure, APL/J is slower, but it's no longer easy to notice.

------------------------------------------------------------------------
|\/| Randy A MacDonald   | APL: If you can say it, it's done.. (ram)
|/\| [EMAIL PROTECTED]  |
|\ |                     |If you cannot describe what you are doing
BSc(Math) UNBF'83        |as a process, you don't know what you're doing.
Sapere Aude              |     - W. E. Deming
Natural Born APL'er      | Demo website: http://156.34.89.50/
-----------------------------------------------------(INTP)----{ gnat }-

----- Original Message ----- 
From: "Roy A. Crabtree" <[EMAIL PROTECTED]>
To: "General forum" <[email protected]>
Sent: Saturday, June 10, 2006 11:02 PM
Subject: Re: [Jgeneral] spreadsheets


Wow.  Perhaps your expectations over the years have been distorted by repeated
disappointment,
leading to the _expectation_ of paying $$$ and getting a fraction back.

On 6/10/06, Randy MacDonald <[EMAIL PROTECTED]> wrote:
>
> Hello J.R.;
>
> I'd love to see examples of where today's computers seem slower than those
> of the 1970s. It's just not the impression I get.



I do not get the impression that today's _computers_ seem slower than those
of the 70s.

I do get the impression that today's _software_ _is_ slower.
Relatively, as well as in many to most cases absolutely.

Do the math.  I leave out networking, graphics, 2ndary storage, etc.; this is
just an oversimplified first-cut analysis.
Please note that the computer I am citing is in fact NOT 70s but late 80s; I
have to start there to even begin
to demonstrate how large the difference actually is:

486 (32-bit/32-bit) DX at 33 MHz with 256 KB cache at 20 ns, using 64 MB RAM
at 120 to 150 ns.
P6 (64-bit/32-bit) at 3072 MHz with 4096 KB cache at 2 ns, using 1024 MB DDR
at 15 ns (roughly).

P6/486: relative architecture comparison, leave it at 1:1
64-bit/32-bit words: 2:1
32/32 bus: 1:1
3072/33 MHz clock: 93:1 (call it 100)
4096/256 KB cache: 16:1
20/2 ns cache latency: 10:1
1024/64 MB memory: 16:1 (but main memory on a Windows box seems to need 4 GB
for multitasking and graphics to work well): 64:1
RAM/DDR: should be the thunk-width ratio; let's leave it at 1:1
150/15 ns memory latency: 10:1
You do not like my ratios or figures?  Use your own.

HW: 1 x 2 x 1 x 100 x 16 x (16 to 64) == 51,200 to 204,800 (leaving the two
10:1 latency ratios aside).
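Back-of-envelope, the product works out like this (a Python sketch; the ratio
values are the rough figures above, nothing measured):

```python
# Rough hardware throughput ratios, P6 box versus late-80s 486,
# taken straight from the figures above.
ratios = [
    1,    # relative architecture comparison, called even
    2,    # 64-bit versus 32-bit words
    1,    # 32/32 bus
    100,  # ~3072 MHz versus 33 MHz clock, rounded up from 93
    16,   # 4096 KB versus 256 KB cache
]

base = 1
for r in ratios:
    base *= r

low = base * 16    # 1024 MB versus 64 MB memory
high = base * 64   # if a Windows box really needs 4 GB

print(low, high)   # 51200 204800
```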

    You want to argue against the other throughput metrics?  Feel free.

Yah, your rough hardware throughput power envelope, computationally, is
50,000 to 200,000 times more than the late 1980s/early 1990s.

... oh, I forgot the $$$ factor ...   you could buy the 486 system for
around $1500-4500, where the 3 GHz is $750-1500;
    build your own and strip to just the populated motherboard, take out the
laptop factor and you get $250 versus $500.

     So you lose a 2:1 factor, relatively.

   But: salaries and inflation taken back in give you another discretionary
(though today such a purchase is NOT) ratio of 4:1 _for_ you.
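The dollar aside is just arithmetic; as a sketch (the prices are my rough
recollections above, not market data):

```python
# Stripped-to-the-motherboard prices from the estimates above.
price_486 = 250    # late-80s 486, stripped build
price_p6 = 500     # ~3 GHz box, stripped build

price_ratio = price_p6 / price_486   # the 2:1 you lose on sticker price
income_ratio = 4                     # salaries/inflation giving back 4:1

net = income_ratio / price_ratio     # ~2:1 net in your favor
print(price_ratio, net)              # 2.0 2.0
```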

You understand why I cannot easily go back to the 70s?

     I learned on an IBM 1401.  An 8-bit-minus architecture with an
instruction set 1,000 times more powerful than today's;
     12K total core, with likely NO cache memory (direct), on God knows what
clock rate (go look it up).

           This in turn ran the entire Guilford County school district, and
leftover time was used for high school student projects.

So, likely, the step up is another 100,000 times at LEAST.
-----
Now take the software then versus the software today: late 80s versus now.
I cannot do so without breaking out the actual categories, and I do not have
the time to do any kind of real analysis:

   I can only point to the monumental levels of SLOP involved in today's
software
 (J being a very powerful pocket example that shows it does not need to be
so).

I) OS software:  The number of assembly steps to get a single OS-level
executive ABI call done:
      70s: 20-100 instructions (go look)
      80s:  1000-4000 instructions
      90s:  5000-25000 instructions.

Approximately; go look up your own factors, yeeesh.
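The step-down ratios fall straight out of those counts; a sketch (the
instruction counts are the rough decade figures above, not measurements):

```python
# Rough instruction counts per OS executive call, by decade,
# as (low, high) from the figures above.
cost = {"70s": (20, 100), "80s": (1000, 4000), "90s": (5000, 25000)}

def midpoint(span):
    lo, hi = span
    return (lo + hi) / 2

def stepdown(newer, older):
    """How many times more instructions the newer era spends."""
    return midpoint(cost[newer]) / midpoint(cost[older])

print(stepdown("90s", "80s"))   # 6.0 -- the ~6:1 late-80s step-down
print(stepdown("90s", "70s"))   # 250.0 -- the 70s step-down at these midpoints
```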

       This INCLUDES the SAME functions that WERE present then, as well as
the later ones of more complexity.

       Step down ratios seem to be 6:1 for the late 80s and up to 10,000:1
since the 70s.

     In other words, common OS functions cost 10,000 times more compared to
1970s OSes.
     Oversimplified?  Yes.  Always correct ratios?  No.  But: when you
establish ANY ratio over 4:1 you then have to
     worry about WSS CHR (look it up) versus main memory to cache to
processor speeds, and aliased goal-missing for clock stall.

         Your 100:1 processor gets down from 4th gear to 1st really fast
that way, all software bound.

        Think of a Lamborghini Countach goin' down the highway doin' 5 MPH.

     Because: who does either overlays (address space reduction, memory
reduction), or cache memory hit rates (HW monitor needed), or
    cache versus main memory hit rate (static analysis at optimization time,
be it first compile or final load), or processor/phase process binding
    to maintain CHR (affining seems to be a lost art outside of SGI and a few
other really big efforts)?

         You lose another 10,000:1

How about graphics?

    70s:  MIT, using a 4 MHz CSMA/CD network, runs 10 FPS graphics in 16-bit
color from a single processor to X stations ...
    80s/early: I used to use a _386_ running at _12_ MHz WITHOUT a smart
graphics card on X10 to show how slow Windows was AND IS:

       I could get 640x480x256 full-color real video travelling at 14-18 FPS
and still have resources left over to do real work.

     Today: I fail to understand why my network graphics feed is NOT using
store-and-forward nodal feeds nationwide from the central server
        to maintain adequate class D (IP select broadcast) data pools for
EVERY video multimedia feed to remain at full speed, and also to dynamically
       adjust itself to the end user's needed bandwidth: some need 1 inch by
1 inch; others need a 17-inch laptop at 32 bit.

   As near as I can tell, the common software currently in use is actually
using 1,000,000 times the resources it actually needs to consume,
   both on the network as well as on the DTE (look it up: in this case the
displaying PC).  More correctly, this ratio:

               OLD needed versus used / NEW needed versus used

     is improvable by about 1e5 to 1e8.

     (If you disagree, I will want $$$ up front before I go through the
whole analysis).

How about primary versus tiered 2ndary storage speeds and buffering?  Do I
need to do the analysis?

    Anyone know of an OS or compiler or DB or OLTP that does H versus Z
buffer analysis, or
     localized CPU potting (most disk drives have a CPU smart enough to form
a true array processor;
     you can push the TPC benchmark rates up to 1000 times the performance
at 100% storage
      usage if you apply that lever: defeating the purpose of the
specifications)?

     How about selective processing load stall (see TSO) for the purpose of
accumulating a list (see HASP)
     of jobs to locally apply in order to reduce critical resource
constraint stalls (see disk head movement delays versus
     H, I, T, and Z scheduling)?  Much less a full tiling, anti-aliasing,
anti-missed-deadline packing fraction optimizer in real time.

    JIT or incremental optimization or re-optimizing when conditions shift?
Not a chance.

      _yet_.  RSN.  Watch the FP and gridding arenas.

I have not even begun to list them.  The truly effective folks are already
using these techniques.  You just do not hear about it.

Basically, on almost any problem a single user can invoke on a single
computer, and MOST of the ones regarded as big on Beowulfs or grids,

    ... if you sneeze and it ain't done, the software (hardware integration
& optimization) suck(s).

But I never got used to paying twice as much every two years for more
software that ran relatively that much slower, so that in two years ...

    Microswift could do it again.  While I paid for it.

Of course, perhaps I am wrong.

But I doubt it.

(I did not invent the analyses or efficiency methods, so I get no kudos.
The REALLY bright people did that; all I did was READ it;

    and REMEMBER where to apply it).

So, basically, I am giving a thumbs up to J in the area of gridding.

> ------------------------------------------------------------------------
> |\/| Randy A MacDonald   | APL: If you can say it, it's done.. (ram)
> |/\| [EMAIL PROTECTED]  |
> |\ |                     |If you cannot describe what you are doing
> BSc(Math) UNBF'83        |as a process, you don't know what you're doing.
> Sapere Aude              |     - W. E. Deming
> Natural Born APL'er      | Demo website: http://156.34.89.50/?JGeneral
> -----------------------------------------------------(INTP)----{ gnat }-
>
> ----- Original Message -----
> From: "John Randall" <[EMAIL PROTECTED]>
> To: "General forum" <[email protected]>
> Sent: Saturday, June 10, 2006 7:44 PM
> Subject: Re: [Jgeneral] spreadsheets
>
>
> > Don Guinn wrote:
> > > Building a spreadsheet is still programming. It's just not procedural.
> > > Entering a document with markup into a word processor is also
> > > programming. Don't forget the old EAM machines with boards. That was
> > > programming too. I am still impressed by that architecture as those
> were
> > > machines with cycle times of as little as 3 hertz and could still
> > > outperform today's PCs with gigahertz speeds.
> > >
> >
> > Point taken.  I think many people are more comfortable with hitting
> recalc
> > until the spreadsheet stabilizes than trying to understand how u^:_
> works.
> >
> > I, too am unimpressed with the speed of current computers, especially
> when
> > they are used interactively.  My first personal computer was a TRS-80
> > Model III, with 16K RAM and a 2 MHz Z80.  My current computer is about
> > 1000 times as fast and has about 50000 times as much memory.  I do not
> see
> > comparable improvements in performance.  In a recent post, Roy Crabtree
> > gave examples in the same vein.
> >
> > Texas Instruments still uses the Z80 (overclocked to 4 MHz) in its
> > calculators.  This is a chip which has no floating point arithmetic, and
> > does not even have a hardware (integer) multiplication.  The arguments
> for
> > sticking with this come down to the fact that it is good enough for
> > interactive use, TI has mature libraries, and power consumption is
> low.  A
> > calculator will run for months on AA batteries.  Compare this with cell
> > phones.
> >
> > Best wishes,
> >
> > John
> >
> >
> > ----------------------------------------------------------------------
> > For information about J forums see http://www.jsoftware.com/forums.htm
> >
>
>
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm
>



-- 
Roy A. Crabtree
UNC '76 gaa.lifer#11086
Room 207 Studio Plus
123 East McCullough Drive
Charlotte, NC 28262-3306
336-340-1304 (office/home/cell/vmail)
704-510-0108x7404 (voicemail residence)

[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]

http://www.authorsden.com/royacrabtree
http://skyscraper.fortunecity.com/activex/720/resume/full.doc
--
(c) RAC/IP, ARE,PRO,PAST
(Copyright) Roy Andrew Crabtree/In Perpetuity
    All Rights/Reserved Explicitly
    Public Reuse Only
    Profits Always Safe Traded
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm

