Re: Microcode, which is a no-go for modern designs

2019-01-09 Thread Eric Korpela via cctalk
On Tue, Jan 8, 2019 at 3:01 PM dwight via cctalk 
wrote:

> To tell you the truth, I can't think of anything other than speed of
> calculating that should be done in floating point. The speed is because
> we've determined to waste silicon on floating point when we should really
> be using combined integer operations that are designed to handle multiple
> arrays (and matrices): addition, multiplication and scaling as single
> instructions.


Floating point is useful in the sciences where you are dealing with large
exponent ranges and/or need appropriate rounding.

This will make everyone groan, but somewhere around here I have a C++
template library for fixed point that tracks the result's bit width and
radix point position and does the scaling.

fixed a(1.0); // stored as 0x0100
fixed b(0.5); // stored as 0x0080
fixed c(a*b); // a*b would be fixed by default, but multiply is
              // overloaded for different widths
              // stored as 0x0008

Metaprogramming has its uses if you don't mind long compile times and you
understand what's going on under the hood.
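
For the curious, here is a minimal sketch of what such a width-tracking
template might look like.  The class name, the <WIDTH,FRAC> parameters, and
the widths chosen below are guesses for illustration, not Eric's actual
library:

#include <cstdint>
#include <cstdio>

// FRAC counts fractional bits; the raw integer holds value * 2^FRAC.
template <int WIDTH, int FRAC>
struct fixed {
    static_assert(WIDTH <= 32 && FRAC >= 0 && FRAC < WIDTH, "sketch limits");
    int32_t raw;
    explicit fixed(double v) : raw(static_cast<int32_t>(v * (1 << FRAC))) {}
    static fixed from_raw(int32_t r) { fixed f(0.0); f.raw = r; return f; }
    double to_double() const { return raw / double(1 << FRAC); }
};

// Multiplication tracks both the total width and the radix point, so no
// precision is silently dropped.  (A real library would widen the
// intermediate product; this sketch keeps it simple and small.)
template <int W1, int F1, int W2, int F2>
fixed<W1 + W2, F1 + F2> operator*(fixed<W1, F1> a, fixed<W2, F2> b) {
    return fixed<W1 + W2, F1 + F2>::from_raw(a.raw * b.raw);
}

int main() {
    fixed<16, 8> a(1.0);   // raw 0x0100
    fixed<16, 8> b(0.5);   // raw 0x0080
    auto c = a * b;        // fixed<32,16>, raw 0x00008000
    std::printf("0x%04x * 0x%04x -> 0x%08x (%g)\n",
                (unsigned)a.raw, (unsigned)b.raw, (unsigned)c.raw, c.to_double());
}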


Approximations (was Re: Microcode, which is a no-go for modern designs)

2019-01-08 Thread Eric Korpela via cctalk
On Sun, Jan 6, 2019 at 1:00 PM Fred Cisin via cctalk 
wrote:

> Few people (but most are right here) can recite PI to enough digits to
> reach the level of inaccuracy.   And those who believe that PI is exactly
> 22/7 are unaffected by FDIV.   (YES, some schools do still teach that!)
>

Really?  I find it hard to believe any schools taught that as anything other
than an approximation.  And as an approximation it's not good for much
unless you are multiplying PI by factors of 7 in your head a lot
(e.g. 21*PI ~ 66).  Personally, I use the PI^2 ~ 10 approximation far more
often when doing math in my head.  But since I'm almost always sitting at a
screen, approximations are less useful than they used to be.  If I need PI
to 600 places, bc's "scale=600;a(1)*4" is always there for the asking.
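
For anyone who wants the numbers, a quick throwaway check of how far off
those mental-math approximations actually are:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    std::printf("22/7     = %.6f  (off by %+.1e)\n", 22.0 / 7.0, 22.0 / 7.0 - pi);
    std::printf("3.1416   = %.6f  (off by %+.1e)\n", 3.1416, 3.1416 - pi);
    std::printf("sqrt(10) = %.6f  (off by %+.1e)   <- the PI^2 ~ 10 trick\n",
                std::sqrt(10.0), std::sqrt(10.0) - pi);
}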

The exception is the approximations that provide physical scale.  For
example, 1 km/s ~ 1 parsec/million years; or v = 4.74*PM/parallax, which
converts proper motion (arcsec/yr) and parallax (arcsec) into tangential
velocity in km/s; or 5 magnitudes is a factor of 100.  And of course, "c" in
whatever units you need.  They are just easier to remember than to look up.
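
As a sketch of that 4.74 rule, with made-up round numbers (1 arcsec/yr of
proper motion at 10 parsecs):

#include <cstdio>

int main() {
    // v_t [km/s] = 4.74 * mu / p, with mu in arcsec/yr and p in arcsec.
    double mu = 1.0;    // proper motion, arcsec/yr (hypothetical)
    double p  = 0.1;    // parallax, arcsec (i.e. 10 parsecs away)
    std::printf("v_t = %.1f km/s\n", 4.74 * mu / p);   // prints 47.4
}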

-- 
Eric Korpela
korp...@ssl.berkeley.edu
AST:7731^29u18e3


Re: Microcode, which is a no-go for modern designs

2019-01-07 Thread Fred Cisin via cctalk

Few people (but most are right here) can recite PI to enough digits to
reach the level of inaccuracy.   And those who believe that PI is exactly
22/7 are unaffected by FDIV.   (YES, some schools do still teach that!)


On Mon, 7 Jan 2019, Johnny Eriksson via cctalk wrote:

Why remember the digits, when a small program can provide them?

 +0un qn"E20Un' 0Uh 0uv HK
 Qn  Qi<\+2*10+(Qq*qi)Ua 0LK Qi*2-1Uj Qa/QjUq Qa-(Qq*Qj)-2\10I$ Qi-1ui>
 Qq/10Ut Qh+Qt+48Uw Qw-58"E48Uw %v' Qv"N:Qv,1^T' QwUv Qq-(Qt*10)Uh>
 :Qv,1^T
 !Can you figure out what this macro does before running it?  It was
 written by Stan Rabinowitz with modifications by Mark Bramhall and
 appeared as the Macro of the Month in the Nov. 1977 issue of the TECO
 SIG newsletter, the "Moby Munger".  For information on the TECO Special
 Interest Group, write to Stan at P.O. Box 76, Maynard, Mass. 01754!


Interesting bit!



Why remember the digits, when a small program can provide them?


Maybe, because the first 80 or 90 digits are half as much work to remember
or type in as that macro.



The current state of computer "science" "education" fails to even get the
students to understand that floating point is a rounded-off approximation.
FDIV merely added a small unexpected further degradation to a
representation that was already inaccurate, and was explicitly an
approximation.
They often represent a dollars-and-cents amount as floating point, just to
avoid figuring out how to insert the PERIOD delimiter.  Use of FDIV is
inappropriate for calculating sales tax, NOT because of the FDIV errors,
which are well within the portion that will be discarded in roundoff, but
because floating point is the wrong representation for money in the first
place.
It should not take until a third semester "Data Structures And Algorithms" 
class, or beyond, for them to learn to not use floating point for cash 
transaction processing.
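
A minimal illustration of the point, with a made-up price and a made-up
8.25% tax rate: keep the money in integer cents and round exactly once,
where the business rule says to, instead of letting binary floating point
round for you:

#include <cstdint>
#include <cstdio>

int main() {
    // Floating point: 19.99 and 0.0825 are both already approximations,
    // and the rounding happens implicitly, wherever it happens to fall.
    double total_f = 19.99 * 1.0825;

    // Integer cents: exact arithmetic, one explicit round (half up).
    int64_t price_c = 1999;                             // $19.99
    int64_t tax_c   = (price_c * 825 + 5000) / 10000;   // 8.25%
    int64_t total_c = price_c + tax_c;

    std::printf("float total:   %.10f\n", total_f);
    std::printf("integer cents: $%lld.%02lld\n",
                (long long)(total_c / 100), (long long)(total_c % 100));
}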


People who use 3.1416 or 22/7 for PI are not in a position to gripe as much
as they did about inaccuracies caused by FDIV.
The point was that people were screaming about errors that were already 
irrelevant to the level of accuracy that they were using, in uses that 
were explicitly NOT INTENDED to be exact.



I am building a base to make a patio table out of a CRASHED 24" RAMAC
platter that had been banged around for half a century with no effort to
store it properly.  (Is there a better use for a CRASHED platter, or a
better way to display it than under glass as a rustic table top?)
Neither a value of 3.14 for PI, nor FDIV, will further degrade my level of 
carpentry skills.

I'm considering printing out and including a copy of the RAMAC plaque
http://www.ed-thelen.org/RAMAC/RAMAC_Plaque_v40.pdf


Re: Microcode, which is a no-go for modern designs

2019-01-07 Thread Paul Koning via cctalk



> On Jan 7, 2019, at 3:20 AM, Johnny Eriksson via cctalk 
>  wrote:
> 
>> Few people (but most are right here) can recite PI to enough digits to 
>> reach the level of inaccuracy.   And those who believe that PI is exactly 
>> 22/7 are unaffected by FDIV.   (YES, some schools do still teach that!)
> 
> Why remember the digits, when a small program can provide them?
> 
>  +0un qn"E20Un' 0Uh 0uv HK
>  Qn  Qi<\+2*10+(Qq*qi)Ua 0LK Qi*2-1Uj Qa/QjUq Qa-(Qq*Qj)-2\10I$ Qi-1ui>
>  Qq/10Ut Qh+Qt+48Uw Qw-58"E48Uw %v' Qv"N:Qv,1^T' QwUv Qq-(Qt*10)Uh>
>  :Qv,1^T
>  !Can you figure out what this macro does before running it?  It was
>  written by Stan Rabinowitz with modifications by Mark Bramhall and
>  appeared as the Macro of the Month in the Nov. 1977 issue of the TECO
>  SIG newsletter, the "Moby Munger".  For information on the TECO Special
>  Interest Group, write to Stan at P.O. Box 76, Maynard, Mass. 01754!
> 
> --Johnny

See also "A spigot algorithm for the digits of pi" by Stanley Rabinowitz and
Stan Wagon, American Mathematical Monthly 102 (1995), 195-203.

For extra credit, find and fix the bug in Stan's program.  (Run it to 1000 
digits or so to see the bug.)
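
For anyone who would rather read it in a conventional language, here is a
cleaned-up sketch adapted from the widely circulated Dik Winter / Achim
Flammenkamp C version of the same spigot idea (about 800 digits; this is an
illustration, not a reconstruction of Stan's macro):

#include <cstdio>
#include <vector>

int main() {
    const int N = 2800;                  // 2800 columns -> about 800 digits
    std::vector<long> f(N + 1, 2000);    // 2000 == 10000 / 5
    f[N] = 0;                            // matches the original's zeroed slot
    long e = 0;                          // carry between 4-digit groups
    for (int c = N; c > 0; c -= 14) {
        long d = 0, g = 2L * c;
        int b = c;
        for (;;) {                       // one pass of the mixed-radix spigot
            d += f[b] * 10000;
            f[b] = d % --g;
            d /= g--;
            if (--b == 0) break;
            d *= b;
        }
        std::printf("%.4ld", e + d / 10000);
        e = d % 10000;
    }
    std::printf("\n");
}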

paul



Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Johnny Eriksson via cctalk
> Few people (but most are right here) can recite PI to enough digits to 
> reach the level of inaccuracy.   And those who believe that PI is exactly 
> 22/7 are unaffected by FDIV.   (YES, some schools do still teach that!)

Why remember the digits, when a small program can provide them?

  +0un qn"E20Un' 0Uh 0uv HK
  Qn  Qi<\+2*10+(Qq*qi)Ua 0LK Qi*2-1Uj Qa/QjUq Qa-(Qq*Qj)-2\10I$ Qi-1ui>
  Qq/10Ut Qh+Qt+48Uw Qw-58"E48Uw %v' Qv"N:Qv,1^T' QwUv Qq-(Qt*10)Uh>
  :Qv,1^T
  !Can you figure out what this macro does before running it?  It was
  written by Stan Rabinowitz with modifications by Mark Bramhall and
  appeared as the Macro of the Month in the Nov. 1977 issue of the TECO
  SIG newsletter, the "Moby Munger".  For information on the TECO Special
  Interest Group, write to Stan at P.O. Box 76, Maynard, Mass. 01754!

--Johnny


Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Fred Cisin via cctalk

Pentiums and it was a real hassle to have to field all those beefs from
customers whose EXPENSIVE processors couldn't divide accurately.


no
It was a real hassle to have to field all those beefs from customers who
had a PERCEPTION that their expensive processors wouldn't divide
accurately.


There was a serious problem with public perception, further fueled by
talk-show comedians, that all bank statements would be wrong, that
missiles would hit the wrong cities, that airplanes couldn't find the
right airport, . . .   AND that all arithmetic in all computers is done
with floating point.


Few people (but most are right here) can recite PI to enough digits to 
reach the level of inaccuracy.   And those who believe that PI is exactly 
22/7 are unaffected by FDIV.   (YES, some schools do still teach that!)



Intel needed to do much better on their PR.  There was a public perception 
that Intel said that they would only replace them for people who could 
PROVE that their work was directly affected.


Instead, Intel needed to make it CLEAR that "ALL will be replaced, at no 
charge.  But, we need a little time to make a few more, SO, we will start 
by replacing those for which the work is directly affected, and replace 
ALL of them as quickly as more are made."


MOST owners would not hit the error during the life of the machine. Most 
power lUsers would have already upgraded to a newer machine (those who 
were screaming the loudest, "upgrade" to a newer machine several times 
a year, even though they don't replace their car EVERY year).


--
Grumpy Ol' Fred ci...@xenosoft.com
3.14159265358979



Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Jeffrey S. Worley via cctalk
On Sun, 2019-01-06 at 11:08 -0800, Josh Dersch wrote:
> That's a good trick, given that the K5 came out in 1996 and the K6 in
> 1997, the FDIV issue blew up in late 1994.

Memory is like that.  The FDIV bug didn't go away because it was
announced; the chips stayed on desktops, and our diagnostic software
frequently contained the FDIV patch to deal with such, for the rest of
the decade.

I went from a 486DLC-40 to a 486 DX2-80 to a K5 133 to a K6 to a K6 to
a K6 to a Celeron.  AMD kept releasing faster K6's.  My last, in the
late 90's, was IIRC a 333 MHz model.

I was a tech in Miami at the time FDIV happened, working for Victors
DataSouth and its Novell networks.  My servers ran 486's but we sold
Pentiums and it was a real hassle to have to field all those beefs from
customers whose EXPENSIVE processors couldn't divide accurately.

In '95 I went to work in Asheville, NC for Uptime Computer Services and
saw a bunch of machines cross my desk which needed the software patch.

In 2000 I was working with Bits and Bytes computer services.


Best,

Jeff




Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Grant Taylor via cctalk

On 1/6/19 11:59 AM, Jeffrey S. Worley via cctalk wrote:
I was a tech in the 90's when the original Pentium FDIV bug was storming.
The issue was confined to the integrated floating point portion of the
processor and was therefore rarely an issue, as the vast majority of
software did not use the mathco portion of the chip.  Only a handful of
applications and a relative handful of users were affected.  This became
Intel's position on the matter, and they hoped the issue would die down to
the handful to whom they would provide new chips.


The issue did not die down, and the bad press forced the decision to
replace ALL Pentiums affected.  Only a relative few were actually replaced
in the home and small business arena.  A software patch was a common
solution to the problem.  It massaged input to the FDIV instruction to
produce a corrected result and worked pretty well as I recall.
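
As I recall, the published workarounds checked the divisor's leading
significand bits against the handful of at-risk patterns from Coe's
analysis and, on a match, scaled both operands by 15/16 before dividing,
which leaves the quotient essentially unchanged but moves the divisor off
the bad region of the lookup table.  A rough sketch of the idea, with the
at-risk test left as a stub rather than guessing the exact bit patterns:

#include <cstdio>

static bool divisor_at_risk(double /*y*/) {
    // Placeholder: the real patches compared the divisor's leading
    // significand bits against the specific patterns from Coe's paper.
    // Always rescaling is merely slower, never wrong.
    return true;
}

static double safe_div(double x, double y) {
    if (divisor_at_risk(y)) {
        // 15/16 is exactly representable, so scaling both operands by the
        // same factor leaves the true quotient unchanged (give or take a
        // final rounding) while changing the divisor's bit pattern.
        x *= 15.0 / 16.0;
        y *= 15.0 / 16.0;
    }
    return x / y;
}

int main() {
    double a = 4195835.0, b = 3145727.0;   // the classic FDIV demo operands
    std::printf("plain:  %.15f\n", a / b);
    std::printf("scaled: %.15f\n", safe_div(a, b));
}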


I suspect that Intel is longing for the Pentium FDIV bug days after the 
speculative execution issues that have surfaced (and gained traction) in 
2018.




--
Grant. . . .
unix || die


Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Josh Dersch via cctalk
On Sun, Jan 6, 2019 at 10:59 AM Jeffrey S. Worley via cctalk <
cctalk@classiccmp.org> wrote:

> On Sat, 2019-01-05 at 12:00 -0600, cctalk-requ...@classiccmp.org wrote:
> > Re: Microcode, which is a no-go for modern designs
>
> I was a tech in the 90's when the original Pentium FDIV bug was
> storming.  The issue was confined to the integrated floating point
> portion of the processor and was therefore rarely an issue as the vast
> majority of software did not use the mathco portion of the chip.  Only
> a handful of applications and relative handful of users were affected.
> This became Intel's position on the matter and they hoped the issue
> would just die down to those handful whom they would provide new chips.
>
> The issue did not die down and the bad press forced the decision to
> replace ALL Pentiums affected.  Only a relative few were actually
> replaced in the home and small business arena.  A software patch was a
> common solution to the problem.  It massaged input to the FDIV
> instruction to produce a corrected result and worked pretty well as I
> recall.
>
> At the time of the storm, the Pentium was still pretty new and very
> expensive.  Most folks were getting along with AMD k5 and k6
> processors.  I WAS.  I went from k6 to Celeron.
>

That's a good trick, given that the K5 came out in 1996 and the K6 in 1997,
the FDIV issue blew up in late 1994.

- Josh



>
> Best
>
> Jeff
>
>


Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Jeffrey S. Worley via cctalk
What defines a 'modern processor'?  The term is pretty slippery.

The Crusoe used microcode to emulate x86 and could therefore emulate
any processor architecture Transmeta wanted.

Crusoe was a pioneer in the low-power market; the processor dynamically
clocked itself in very small steps depending on need.  This is a
familiar feature now but was pretty revolutionary for the time.
Interestingly, Linus Torvalds was in on the design and was on the board
of Transmeta.  A fair number were sold to Sony for their VAIO series of
notebooks.

Does Crusoe qualify as a 'modern' processor?  In my book yes, but I
have a very old book.. :0
best,

Jeff



Re: Microcode, which is a no-go for modern designs

2019-01-06 Thread Jeffrey S. Worley via cctalk
On Sat, 2019-01-05 at 12:00 -0600, cctalk-requ...@classiccmp.org wrote:
> Re: Microcode, which is a no-go for modern designs

I was a tech in the 90's when the original Pentium FDIV bug was
storming.  The issue was confined to the integrated floating point
portion of the processor and was therefore rarely an issue, as the vast
majority of software did not use the mathco portion of the chip.  Only
a handful of applications and a relative handful of users were affected.
This became Intel's position on the matter, and they hoped the issue
would die down to the handful to whom they would provide new chips.

The issue did not die down, and the bad press forced the decision to
replace ALL Pentiums affected.  Only a relative few were actually
replaced in the home and small business arena.  A software patch was a
common solution to the problem.  It massaged input to the FDIV
instruction to produce a corrected result and worked pretty well as I
recall.

At the time of the storm, the Pentium was still pretty new and very
expensive.  Most folks were getting along with AMD k5 and k6
processors.  I WAS.  I went from k6 to Celeron.

Best

Jeff 



Re: Microcode, which is a no-go for modern designs

2019-01-04 Thread Eric Smith via cctalk
On Fri, Jan 4, 2019 at 8:08 AM dwight via cctalk 
wrote:

> My ability to understand these papers is somewhat limited. If I
> understand correctly, it is the following.
> Most divide routines that I've seen allow the remainder to be 1,0,-1
> relative to the normal remainder. The answer will converge as the error of
> the remainder never leaves this range.
> In the case of the Pentium, the remainder is 2,1,0,-1,-2. This allows the
> division to converge on the answer quicker. The error was that if the
> remainder was right on one edge it would eventually fall off the edge and
> not converge. From the paper, that would be the 5 1's in a row in the
> divisor.
>

My interpretation is that with the five table entries with the wrong
values, it always converges, but for some input values it converges to the
wrong answer. I could be wrong, though.
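
For readers who want to see the digit-set idea concretely, here is a toy
radix-2 SRT divider using the conventional {-1,0,+1} digits described
above.  The Pentium used radix-4 with digits {-2..+2} chosen from a lookup
table (the bug was in that table), so this is only an illustration of the
redundant-digit idea, not of the bug itself:

#include <cmath>
#include <cstdio>

double srt_divide(double x, double d, int steps = 60) {
    // Assumes positive, nonzero inputs; purely illustrative.
    int ex, ed;
    double xm = std::frexp(x, &ex);    // x = xm * 2^ex, xm in [0.5, 1)
    double dm = std::frexp(d, &ed);    // normalized divisor, dm in [0.5, 1)
    double p = xm / 2.0;               // initial partial remainder, |p| <= dm
    double q = 0.0, w = 1.0;
    for (int i = 0; i < steps; ++i) {
        int digit = 0;                 // digit selection only needs to look
        if (2 * p >= 0.5) digit = 1;   //   at a few leading bits of p
        else if (2 * p <= -0.5) digit = -1;
        p = 2 * p - digit * dm;        // next partial remainder, stays in [-dm, dm]
        q += digit * w;                // digits may be negative and cancel
        w *= 0.5;                      //   later ones: that redundancy is the point
    }
    return std::ldexp(q, ex - ed);     // undo the normalization
}

int main() {
    double a = 4195835.0, b = 3145727.0;   // the classic FDIV demonstration pair
    std::printf("SRT sketch: %.15f\nhardware:   %.15f\n", srt_divide(a, b), a / b);
}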

In any case, I'm sure there were a lot of people at Intel that were unhappy
that they couldn't patch this with a microcode fix or workaround. Although
Intel relented and offered to replace ALL affected Pentiums, apparently a
substantial number of the buggy ones were never sent in.


Re: Microcode, which is a no-go for modern designs

2019-01-04 Thread dwight via cctalk
My ability to understand these papers is somewhat limited. If I understand
correctly, it is the following.
Most divide routines that I've seen allow the remainder to be 1,0,-1 relative
to the normal remainder. The answer will converge as the error of the remainder
never leaves this range.
In the case of the Pentium, the remainder is 2,1,0,-1,-2. This allows the
division to converge on the answer quicker. The error was that if the remainder
was right on one edge it would eventually fall off the edge and not converge.
From the paper, that would be the 5 1's in a row in the divisor.
At least that is my understanding. It is too early in the morning for me.
Dwight


From: Eric Smith 
Sent: Thursday, January 3, 2019 11:55 PM
To: dwight; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: Microcode, which is a no-go for modern designs

And the original analysis paper, "It Takes Six Ones to Reach a Flaw":
http://www.acsel-lab.com/arithmetic/arith12/papers/ARITH12_Coe.pdf



Re: Microcode, which is a no-go for modern designs

2019-01-03 Thread Eric Smith via cctalk
And the original analysis paper, "It Takes Six Ones to Reach a Flaw":
http://www.acsel-lab.com/arithmetic/arith12/papers/ARITH12_Coe.pdf


Re: Microcode, which is a no-go for modern designs

2019-01-03 Thread Eric Smith via cctalk
Also
http://www-math.mit.edu/~edelman/homepage/papers/pentiumbug.pdf


Re: Microcode, which is a no-go for modern designs

2019-01-03 Thread Eric Smith via cctalk
On Wed, Jan 2, 2019 at 9:12 PM dwight via cctalk 
wrote:

> I believe that is the one. Intel tried to say it wasn't an issue until it
> was shown that the error was significant when using floating point numbers
> near integer values. I suspect that the fellow that forgot to include the
> mask file for that ROM got a bad review.
>

The majority of the ROM contents were present. Apparently five entries were
missing.
http://www.verifsudha.com/2017/09/11/pentium-fdiv-bug-curious-engineer/


Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread dwight via cctalk
I believe that is the one. Intel tried to say it wasn't an issue until it was 
shown that the error was significant when using floating point numbers near 
integer values. I suspect that the fellow that forgot to include the mask file 
for that ROM got a bad review.
Dwight


From: Eric Smith 
Sent: Wednesday, January 2, 2019 3:42 PM
To: dwight; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: Microcode, which is a no-go for modern designs

On Wed, Jan 2, 2019 at 4:12 PM dwight via cctalk wrote:
I thought I'd note that the divide problem couldn't have been patched with a
microcode patch.

If you're talking about the Pentium FDIV bug, present on the early 80501 chips 
(60 and 66 MHz) and 80502 chips (75, 90, and 100 MHz), they weren't able to fix 
that with a microcode patch. They actually issued a recall for those chips.

However, Intel has successfully fixed other bugs using microcode patches, 
including some but not all of the recent speculative execution side channel 
problems (Meltdown and Spectre). They have also used microcode patches to 
disable instructions that were broken and couldn't be fixed by microcode, 
including the TSX-NI instructions of some Haswell, Broadwell, and Skylake CPUs.




Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread Eric Smith via cctalk
On Wed, Jan 2, 2019 at 4:12 PM dwight via cctalk 
wrote:

> I thought I'd note that the divide problem couldn't have been patched
> with a microcode patch.


If you're talking about the Pentium FDIV bug, present on the early 80501
chips (60 and 66 MHz) and 80502 chips (75, 90, and 100 MHz), they weren't
able to fix that with a microcode patch. They actually issued a recall for
those chips.

However, Intel has successfully fixed other bugs using microcode patches,
including some but not all of the recent speculative execution side channel
problems (Meltdown and Spectre). They have also used microcode patches to
disable instructions that were broken and couldn't be fixed by microcode,
including the TSX-NI instructions of some Haswell, Broadwell, and Skylake
CPUs.


Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread dwight via cctalk
I thought I'd note that the divide problem couldn't have been patched with a
microcode patch. It was because one of the ROM arrays used as part of the
divide lookup was missing its data. It would have been much more than a simple
patch to fix; it would have had to fall back to a full subroutine patch.
Today's processors are still memory bound for speed, even with local cache. It
is mostly poor code generation by the compilers that causes instruction cache
issues, but data is also a problem, as larger memory address spaces have made
cache misses more common.
Some of the instruction misses are hidden by pipeline depth, but misses become
more of an issue as one has a deeper pipe. Again, it is the current compilers
that often make larger pipe depth impractical. It was this that made
speculative execution necessary, and that was the cause of all the security
issues of late.
In any case, it all comes back to memory latency. All the tricks to minimize 
its effect have caused other issues. It is still the biggest single issue 
blocking higher speed processors.
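
A minimal pointer-chasing sketch of the kind usually used to make the latency
point concrete; the absolute numbers are machine-dependent, but the jump once
the working set falls out of cache is the story:

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

static double ns_per_load(std::size_t n) {
    // Link the n slots into one big random cycle so the chase cannot hide
    // inside a small, cache-resident loop.
    std::vector<std::size_t> order(n), next(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];

    std::size_t p = order[0];
    const std::size_t loads = 20'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < loads; ++i) p = next[p];   // serial dependency chain
    auto t1 = std::chrono::steady_clock::now();
    volatile std::size_t sink = p; (void)sink;              // keep the chain alive
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / loads;
}

int main() {
    for (std::size_t n : {std::size_t{1} << 12, std::size_t{1} << 16,
                          std::size_t{1} << 20, std::size_t{1} << 24})
        std::printf("%9zu elements: %5.1f ns per load\n", n, ns_per_load(n));
}
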
The 5 GHz wall is also there but is being fudged around by minimizing the 
active circuits at any one time.  It buys a little but don't expect to see 10 
GHz processors any time soon on silicon.
Dwight

From: cctalk  on behalf of Diane Bruce via 
cctalk 
Sent: Wednesday, January 2, 2019 12:09 PM
To: Paul Koning; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: Microcode, which is a no-go for modern designs

On Wed, Jan 02, 2019 at 02:37:44PM -0500, Paul Koning via cctalk wrote:
>
>
> > On Jan 2, 2019, at 2:31 PM, Chuck Guzis via cctalk  
> > wrote:
> >
> > On 1/2/19 10:44 AM, Guy Sotomayor Jr wrote:
> >
> >> Also, recall that there are different forms of micro-code: horizontal
> >> and vertical.  I think that IBM (in the S/360, S/370, S/390, z/Series)
> >> uses the term micro-code for horizontal micro-code and millicode
> >> for vertical microcode.
> >
> > On the CDC STAR-100, "microcode" as such was a relatively recent concept
> > and the designers went overboard, mostly because of an ill-defined
> > customer base (hence, BCD and other commercial-class instructions, like
> > translate, edit and mark, etc.).  The STAR is basically a RISC-type
> > vector architecture with a pile of microcoded instructions bolted on.
> > ...
> > For a compiler writer, or even an assembly coder, this was more of a
> > problem--which combination of instructions could be used to the greatest
> > effect?  And why do I have to have the hardware manual on my desk to
> > look up instructions?
>
> That reminds me of the Motorola 68040.  I used that at DEC in a high speed 
> switch (DECswitch 900 -- FDDI to 6 Ethernet ports).  When studying the 
> instruction timings, I realized there is a "RISC subset" of the instructions 
> that run fast, a cycle or so per instruction.  But the more complex 
> instructions are much slower.  So the conclusion for a fastpath writer is to 
> use the RISC subset and pretend the fancy addressing mode instructions do not 
> exist.


Which then reminds me further of the ColdFire processor, which *did*
remove the more complex instructions from the chip!


>
>paul
>

Diane
--
- d...@freebsd.org d...@db.net http://artemis.db.net/~db


Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread Paul Koning via cctalk



> On Jan 2, 2019, at 2:31 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 1/2/19 10:44 AM, Guy Sotomayor Jr wrote:
> 
>> Also, recall that there are different forms of micro-code: horizontal
>> and vertical.  I think that IBM (in the S/360, S/370, S/390, z/Series)
>> uses the term micro-code for horizontal micro-code and millicode
>> for vertical microcode.
> 
> On the CDC STAR-100, "microcode" as such was a relatively recent concept
> and the designers went overboard, mostly because of an ill-defined
> customer base (hence, BCD and other commercial-class instructions, like
> translate, edit and mark, etc.).  The STAR is basically a RISC-type
> vector architecture with a pile of microcoded instructions bolted on.
> ...
> For a compiler writer, or even an assembly coder, this was more of a
> problem--which combination of instructions could be used to the greatest
> effect?  And why do I have to have the hardware manual on my desk to
> look up instructions?

That reminds me of the Motorola 68040.  I used that at DEC in a high speed 
switch (DECswitch 900 -- FDDI to 6 Ethernet ports).  When studying the 
instruction timings, I realized there is a "RISC subset" of the instructions 
that run fast, a cycle or so per instruction.  But the more complex 
instructions are much slower.  So the conclusion for a fastpath writer is to 
use the RISC subset and pretend the fancy addressing mode instructions do not 
exist.

paul



Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread Chuck Guzis via cctalk
On 1/2/19 10:44 AM, Guy Sotomayor Jr wrote:

> Also, recall that there are different forms of micro-code: horizontal
> and vertical.  I think that IBM (in the S/360, S/370, S/390, z/Series)
> uses the term micro-code for horizontal micro-code and millicode
> for vertical microcode.

On the CDC STAR-100, "microcode" as such was a relatively recent concept
and the designers went overboard, mostly because of an ill-defined
customer base (hence, BCD and other commercial-class instructions, like
translate, edit and mark, etc.).  The STAR is basically a RISC-type
vector architecture with a pile of microcoded instructions bolted on.
While this results in a great many instructions, many were used little.
It's hard to grasp that the same guy who designed the CDC 6400 (a RISC
architecture) also designed the STAR-100.

It's worth noting that all 256 8-bit opcodes are used; many are modified
by another 8-bit modifier quantity whose meaning varies greatly.  In
effect, you have something closer to 1000 distinct instructions, if not
more.

For a compiler writer, or even an assembly coder, this was more of a
problem--which combination of instructions could be used to the greatest
effect?  And why do I have to have the hardware manual on my desk to
look up instructions?

Subsequent embodiments of the architecture dropped a great many
microcoded instructions, with, as far as I can tell, no deleterious effect.

The manual for the STAR-100 hardware description is on bitsavers under
cdc/cyber/cyber_200 if you're curious.

Some earlier computers implemented additional instructions by
hard-coding subroutines whose entry address was determined by the
op-code of the instruction.  Macro-coding, if you will.

--Chuck



Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread Guy Sotomayor Jr via cctalk



> On Jan 2, 2019, at 10:22 AM, Chuck Guzis via cctalk  
> wrote:
> 
> On 1/2/19 8:02 AM, Jon Elson via cctalk wrote:
> 
>> Random logic instruction decode was a REAL issue in about 1960 - 1965,
>> when computers were built with discrete transistors.  The IBM 7090, for
>> instance, had 55,000 transistors on 11,000 circuit boards.  I don't know
>> how much of that was instruction decode, but I'll guess that a fair bit
>> was.  The IBM 360's benefited from microcode, allowing them to have a
>> much more complex and orthogonal instruction set with less logic.
>> 
>> But, once ICs were available, the control logic was less of a problem. 
>> But, microcode still made sense, as memory was so slow that performance
>> was dictated by memory cycle time, and the microcode did not slow the
>> system down.  Once fast cache became standard, then eliminating
>> performance bottlenecks became important.  And, once we went from lots
>> of SSI chips to implement a CPU to one big chip, then it was possible to
>> implement the control logic within the CPU chip efficiently.
> 
> I don't know--"microcode" in today's world is a very slippery term.   If
> you're talking about vertical microcode, then I'm inclined to agree with
> you.  But even ARM, which is held up as the golden example of
> microcode-less CPU design, is programmed via HDL, which is then compiled
> into a hardware design, at least in instruction decoding. So ARM is a
> programmed implementation.   I suspect that if x86 microcode were to be
> written out in HDL and compiled, Intel could make the same claim.
> 
> I think of it as being akin to "interpreted" vs. "compiled"
> languages--the boundary can be rather fuzzy (e.g. "tokenizing",
> "p-code", "incremental compilation"... etc.)
> 

Remember that ARM licenses its ISA as well as implementations.
Some ARM licensees do their own implementations, and those *are*
microcoded.

There are a number of reasons for doing micro-code, and a number
of architectures use it, especially if the micro-code can be “patched”
(which AFAIK they all do now) to allow for fixing “bugs” once the
chip has been released.  If the bug is severe enough (remember the
DIV bug in the early 80286?) to warrant a recall, then the overhead
of having patchable micro-code will pay for itself many fold.

It is also important to note that today’s CPUs are not just a bare
CPU implementing an ISA.  They are embedded in an SoC with
potentially many other micro-controllers/CPUs that are not visible
to the programmer, and those are all “micro-coded” and control
various aspects of the SoC.  The last SoC that I worked on had,
in addition to the 8 ARM application CPUs (which are micro-coded,
BTW), over 12 other micro-controllers (mostly ARM R5s) and
4 VLIW DSPs (not to mention several megabytes of SRAM that
is outside of the various caches).  After all, you have to do something
with 7 *billion* transistors.  ;-)

Also, recall that there are different forms of micro-code: horizontal
and vertical.  I think that IBM (in the S/360, S/370, S/390, z/Series)
uses the term micro-code for horizontal micro-code and millicode
for vertical microcode.

TTFN - Guy



Re: Microcode, which is a no-go for modern designs

2019-01-02 Thread Chuck Guzis via cctalk
On 1/2/19 8:02 AM, Jon Elson via cctalk wrote:

> Random logic instruction decode was a REAL issue in about 1960 - 1965,
> when computers were built with discrete transistors.  The IBM 7090, for
> instance, had 55,000 transistors on 11,000 circuit boards.  I don't know
> how much of that was instruction decode, but I'll guess that a fair bit
> was.  The IBM 360's benefited from microcode, allowing them to have a
> much more complex and orthogonal instruction set with less logic.
> 
> But, once ICs were available, the control logic was less of a problem. 
> But, microcode still made sense, as memory was so slow that performance
> was dictated by memory cycle time, and the microcode did not slow the
> system down.  Once fast cache became standard, then eliminating
> performance bottlenecks became important.  And, once we went from lots
> of SSI chips to implement a CPU to one big chip, then it was possible to
> implement the control logic within the CPU chip efficiently.

I don't know--"microcode" in today's world is a very slippery term.   If
you're talking about vertical microcode, then I'm inclined to agree with
you.  But even ARM, which is held up as the golden example of
microcode-less CPU design, is programmed via HDL, which is then compiled
into a hardware design, at least in instruction decoding. So ARM is a
programmed implementation.   I suspect that if x86 microcode were to be
written out in HDL and compiled, Intel could make the same claim.

I think of it as being akin to "interpreted" vs. "compiled"
languages--the boundary can be rather fuzzy (e.g. "tokenizing",
"p-code", "incremental compilation"... etc.)

--Chuck