[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-29 Thread Paul Koning via cctalk



> On Apr 29, 2024, at 1:59 AM, Steve Lewis via cctalk  
> wrote:
> 
> After learning more about the PALM processor in the IBM 5100, I see it has a
> similarity to the 6502 in that the first 128 bytes of RAM are a "register
> file."  All its registers (R0 to R15, across 4 interrupt "layers") occupy
> those first addresses.  In addition, they are physically on the processor
> itself (not in actual RAM).

That sort of thing goes way back.  There is of course the PDP-6 and PDP-10 
where the 16 registers alias to the low 16 memory locations.  And the notion of 
registers per interrupt level also appears in the Philips PR-8000, a 24 bit 
minicomputer from the 1960s aimed (among other things) at industrial control 
applications.  That sort of architecture makes interrupt handling very 
efficient since it eliminates state saving.  Unfortunately there's very little 
documentation of that machine anywhere; the little I found is on Bitsavers.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-28 Thread Steve Lewis via cctalk
After learning more about the PALM processor in the IBM 5100, I see it has a
similarity to the 6502 in that the first 128 bytes of RAM are a "register
file."  All its registers (R0 to R15, across 4 interrupt "layers") occupy
those first addresses.  In addition, they are physically on the processor
itself (not in actual RAM).   I've been meaning to come up with a sample
PALM program that tests whether there is any performance advantage to that
(that is, something that "does stuff" with data in addresses 0 to 127, then
"does that same stuff" at a higher address like $800+, and see if there is a
noticeable performance difference).   The earliest document I can find on
PALM is from 1972 (or just a few months after the 8008 - the actual initial
production date of PALM is unknown, the 1972 date is just when IBM
documented the instruction set).  But I think the IBM System/3 had a
similar design (or at least, I recall a mention that the System/3
registers are in RAM -- not sure if that's literal or just
address-access-wise, but in any case the System/3 was said to be pretty
difficult to program for).

Anyhow, to me the PALM may be an earlier "RISC" approach in that its
instructions are always 2 bytes (4 bits for a main opcode -- yes, only 16
categories -- then a few bits for a "modifier", while the middle bits
specify a register R0 to R15 that the instruction involves), in contrast to
the variable instruction lengths used by the System/360.  There is one
exception in PALM where a kind of "long jump" instruction is followed by
another 2 bytes that hold the target address.
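
As a rough sketch of how little decoding such a fixed two-byte format needs
(in Python; the field positions here are my guess for illustration only, not
the documented PALM encoding -- check the 1972 IBM manual for the real layout):

    # Hypothetical decoder for a fixed 16-bit, PALM-style instruction word.
    # Assumed layout (illustrative only): 4-bit opcode, 4-bit register
    # number, 8-bit modifier byte.

    def decode(word):
        """Split a 16-bit instruction word into (opcode, register, modifier)."""
        opcode   = (word >> 12) & 0xF    # one of 16 opcode "categories"
        register = (word >> 8) & 0xF     # R0..R15
        modifier = word & 0xFF           # modifier / displacement byte
        return opcode, register, modifier

    # decode(0x3A5C) -> (3, 10, 92): opcode 3, register R10, modifier 0x5C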


You won't hear much about PALM - though I am excited that emulation support
for it has recently been added to MAME!  I wonder whether Chuck Peddle
somehow crossed paths with PALM in his early engineering career, or if in
some indirect way there was some lineage or connection there (in a "dude,
your processor doesn't need to be that complicated, I know a system that in
16 instructions does all sorts of stuff" kind of way).  BTW, IBM's cost
list during the SCAMP development put the PALM processor card at
about $300 (c. 1973).

All that said, in the early 1970s I don't think anyone was yet using the
terms RISC and CISC.

-Steve v*




On Sun, Apr 21, 2024 at 7:50 PM Peter Coghlan via cctalk <
cctalk@classiccmp.org> wrote:

> My first exposure to a computer at home was a BBC Micro with 32kB of RAM
> and 32kB of ROM.  Included in this was a 16kB BASIC ROM which was regarded
> as fast and powerful, featuring 32 bit integer variables, 40 bit floating
> point variables, variable length strings, structured programming constructs
> and everything accessed by keyword statements rather than PEEK this and
> POKE that.
>
> This was implemented by a humble 6502 running at (mostly) 2MHz, with one
> 8 bit arithmetic register, two 8 bit index registers, one 8 bit stack
> pointer, a 16 bit program counter and a few flag bits.
>
> I would have expected that a computer featuring a Z80 with its larger
> register set, 16 bit arithmetic operations, richer instruction set and
> general bells and whistles would have been able to produce a much superior
> implementation in terms of speed or features or both but I never came
> across one.
>
> Why is that?  Did the Z80 take more cycles to implement its more complex
> instructions?  Is this an early example of RISC vs CISC?
>
> Regards,
> Peter Coghlan
>
>


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-27 Thread Alexander Schreiber via cctalk
On Mon, Apr 22, 2024 at 02:45:50PM -0500, Mike Katz via cctalk wrote:
> Cycle accurate emulation becomes impossible in the following circumstances:
> 
>  * Branch prediction and pipelining can cause out of order execution
>and the execution path become data dependent.
>  * Cache memory.  It can be very difficult to predict a cache flush or
>cache miss or cache look aside buffer hit
>  * Memory management can inject wait states and cause other cycle
>counting issues
>  * Peripherals can inject unpredictable wait states
>  * Multi-core processors because you don't necessarily know what core
>is doing what and possibly one core waiting on another core.
>  * DMA can cause some CPUs to pause because the bus is busy doing DMA
>transfers (not all processors have this as an issue).
>  * Some CPUs shut down clocks and peripherals if they are not used and
>they take time to re-start.
>  * Any code that waits for some kind of external input.

That was the reason (or so it was explained to me) why automotive ECUs
stuck to relatively simple microcontrollers[0] for a long time: you could
do simple cycle counting to precisely predict the timing of the
instruction flow - getting the timing wrong for firing the spark
plugs because your execution path takes 1ms longer than expected
tends to have Expensive Consequences (TM).
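
As a sketch of that style of reasoning (not any real ECU toolchain -- the
mnemonics and cycle counts below are invented), worst-case time on a
cache-less, non-pipelined core is just the sum of documented per-instruction
cycles:

    # Static cycle counting: with no cache, no branch prediction and no
    # speculation, a code path's time is the sum of datasheet cycle counts.
    # The mnemonics and counts here are made up for illustration.

    CYCLES = {"LD": 2, "ADD": 1, "CMP": 1, "BNE": 3, "ST": 2}

    def path_time_us(path, clock_hz):
        """Worst-case time in microseconds for one pass through 'path'."""
        return sum(CYCLES[op] for op in path) / clock_hz * 1e6

    spark_path = ["LD", "ADD", "CMP", "BNE", "ST"]
    print("%.3f us at 8 MHz" % path_time_us(spark_path, 8_000_000))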

Kind regards,
Alex.
[0] No cache, no branch prediction, no speculative execution, ...
-- 
"Opportunity is missed by most people because it is dressed in overalls and
 looks like work."  -- Thomas A. Edison


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-25 Thread Harald Arnesen via cctalk

Fred Cisin via cctalk [24/04/2024 02.06]:


Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?

Commodore 128 had Z80 and 6502


Z80 and 8502, actually.
--
Hilsen Harald



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Chuck Guzis via cctalk
On 4/24/24 13:10, David Brownlee via cctalk wrote:

> Typically the second processor would run as primary, using the
> original 6502 to handle input, display and I/O (and on 32016 you
> *really* wanted someone else to deal with anything time critical like
> interrupts :)

That's the way we did it on the Poppy--the 80186 did the I/O for the
80286.   You could run CP/M-86 or MSDOS without the 80286.   I never did
figure out if the 80286 at 6MHz was outperformed on MSDOS by the 8MHz 80186.

--Chuck





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread ben via cctalk

On 2024-04-24 2:55 p.m., Gordon Henderson via cctalk wrote:

On Wed, 24 Apr 2024, David Brownlee via cctalk wrote:


If we're talking about machines with a Z80 and 6502, it would be
remiss not to link back to the machine mentioned in the original
message - the BBC micro, with its onboard 6502 and "Tube" interface
which could take a second processor option, including
- Z80
- 65C02 / 65C102
- NS 16032 (ahem 32016)
- 8088 (Torch) / 80186, 80286 (last developed but never released)
- ARM1 (Original ARM development board. Rare as hens teeth :) / ARM7
(someone having a laugh in later years)

Typically the second processor would run as primary, using the
original 6502 to handle input, display and I/O (and on 32016 you
*really* wanted someone else to deal with anything time critical like
interrupts :)


"later years" is .. Today where we connect a Raspbery Pi to the BBC 
Micros Tube interface and emulate all those CPUs and several more like 
the PDP/11. One of the 6502 emulations runs at the equivalent of a 
275Mhz CPU...


So if you want a Z80 then emulate it - it runs CP/M just as well as any 
other CP/M system.


The original ARM2 is there too.

The current list:

  n   Processor - *FX 151,230,n
  0 * 65C02 (fast)
  1   65C02 (3MHz, for games compatibility)
  2   65C102 (fast)
  3   65C102 (4MHz, for games compatibility)
  4   Z80 (1.21)
  5   Z80 (2.00)
  6   Z80 (2.2c)
  7   Z80 (2.30)
  8   80286
  9   MC6809
11   PDP-11
12   ARM2
13   32016
14   Disable
15   ARM Native
16   LIB65C02 64K
17   LIB65C02 256K Turbo
18   65C816 (Dossy)
19   65C816 (ReCo)
20   OPC5LS
21   OPC6
22   OPC7
24   65C02 (JIT)
28   Ferranti F100-L

Cheers,

-Gordon


This would be great, but I live on the other side of the pond
and BBC anything is hard to find, let alone Micros.
Where is my "Dr. Who"?
Ben.




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Gordon Henderson via cctalk

On Wed, 24 Apr 2024, David Brownlee via cctalk wrote:


If we're talking about machines with a Z80 and 6502, it would be
remiss not to link back to the machine mentioned in the original
message - the BBC micro, with its onboard 6502 and "Tube" interface
which could take a second processor option, including
- Z80
- 65C02 / 65C102
- NS 16032 (ahem 32016)
- 8088 (Torch) / 80186, 80286 (last developed but never released)
- ARM1 (Original ARM development board. Rare as hens teeth :) / ARM7
(someone having a laugh in later years)

Typically the second processor would run as primary, using the
original 6502 to handle input, display and I/O (and on 32016 you
*really* wanted someone else to deal with anything time critical like
interrupts :)


"later years" is .. Today where we connect a Raspbery Pi to the BBC Micros 
Tube interface and emulate all those CPUs and several more like the 
PDP/11. One of the 6502 emulations runs at the equivalent of a 275Mhz 
CPU...


So if you want a Z80 then emulate it - it runs CP/M just as well as any 
other CP/M system.


The original ARM2 is there too.

The current list:

 n   Processor - *FX 151,230,n
 0 * 65C02 (fast)
 1   65C02 (3MHz, for games compatibility)
 2   65C102 (fast)
 3   65C102 (4MHz, for games compatibility)
 4   Z80 (1.21)
 5   Z80 (2.00)
 6   Z80 (2.2c)
 7   Z80 (2.30)
 8   80286
 9   MC6809
11   PDP-11
12   ARM2
13   32016
14   Disable
15   ARM Native
16   LIB65C02 64K
17   LIB65C02 256K Turbo
18   65C816 (Dossy)
19   65C816 (ReCo)
20   OPC5LS
21   OPC6
22   OPC7
24   65C02 (JIT)
28   Ferranti F100-L

Cheers,

-Gordon


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread David Brownlee via cctalk
On Wed, 24 Apr 2024 at 01:18, Bill Gunshannon via cctalk
 wrote:
>
> On 4/23/2024 8:06 PM, Fred Cisin via cctalk wrote:
> > Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?
> >
>
> What about the Tandy 16 and 6000.  M68K and Z80.

If we're talking about machines with a Z80 and 6502, it would be
remiss not to link back to the machine mentioned in the original
message - the BBC micro, with its onboard 6502 and "Tube" interface
which could take a second processor option, including
- Z80
- 65C02 / 65C102
- NS 16032 (ahem 32016)
- 8088 (Torch) / 80186, 80286 (last developed but never released)
- ARM1 (Original ARM development board. Rare as hens teeth :) / ARM7
(someone having a laugh in later years)

Typically the second processor would run as primary, using the
original 6502 to handle input, display and I/O (and on 32016 you
*really* wanted someone else to deal with anything time critical like
interrupts :)

David


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Chuck Guzis via cctalk
On 4/24/24 11:34, Fred Cisin wrote:
> Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?
> 
> On Tue, 23 Apr 2024, Chuck Guzis via cctalk wrote:
>> Couldn't Bill Godbout's CPU-68K board co-exist with other CPU boards?
> 
> Did he, or anybody else, make an S100 6502 CPU board?

Early on, yes, there was at least one 6502 board, but I don't recall who
offered it--a couple of friends had them.  Perhaps SSM; I didn't find
them that interesting, given the lack of any sort of software support.

--Chuck


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Fred Cisin via cctalk

Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?


On Tue, 23 Apr 2024, Chuck Guzis via cctalk wrote:

Couldn't Bill Godbout's CPU-68K board co-exist with other CPU boards?


Did he, or anybody else, make an S100 6502 CPU board?


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Chuck Guzis via cctalk
On 4/24/24 10:54, Robert Feldman via cctalk wrote:
> The Otrona Attache 8:16 had a Z80A and an 8086 on a daughter card.

Of course, Godbout offered the S100 85/88 board in the same vein.

--Chuck



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Robert Feldman via cctalk
The Otrona Attache 8:16 had a Z80A and an 8086 on a daughter card.

Bob


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-24 Thread Chuck Guzis via cctalk
On 4/23/24 21:06, ben via cctalk wrote:

>>
> I remember Bill Godbout's PACE ads. Now I got the $$$ and time I can't
> find any chips.

National was handing the chips with manuals out for free at WESCON--I
got mine there, built up an S100 board with all of the interface logic
(I think the PACE was originally PMOS) and never could really get the
thing working reliably.  It wasn't clear to me, after all was said and
done, that PACE offered any substantial improvement over the 8 bit chips.

A later version used standard TTL levels and should still be
findable--and a lot easier to put into a design.

One thing that NSI could be depended on for--absolute volatility in the
choice of design du jour.

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread ben via cctalk

On 2024-04-23 8:40 p.m., Chuck Guzis via cctalk wrote:

On 4/23/24 17:18, Bill Gunshannon via cctalk wrote:



On 4/23/2024 8:06 PM, Fred Cisin via cctalk wrote:

Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?


Couldn't Bill Godbout's CPU-68K board co-exist with other CPU boards?

--Chuck

I remember Bill Godbout's PACE ads.  Now that I've got the $$$ and time,
I can't find any chips.





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Chuck Guzis via cctalk
On 4/23/24 17:18, Bill Gunshannon via cctalk wrote:
> 
> 
> On 4/23/2024 8:06 PM, Fred Cisin via cctalk wrote:
>> Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?

Couldn't Bill Godbout's CPU-68K board co-exist with other CPU boards?

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Fred Cisin via cctalk

On Tue, 23 Apr 2024, Van Snyder via cctalk wrote:

I had a "Magic Sac" thing-y that plugged into the ROM port of my Atari
1040. When I put a Mac ROM into its socket, I could run most Mac
programs that I had.


That was pretty cool.
The developer of it said that when he met with Apple's lawyers, "Magic 
Sac" was as close to "Mac" as they would let him be.


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Van Snyder via cctalk
On Tue, 2024-04-23 at 17:06 -0700, Fred Cisin via cctalk wrote:
> a significant portion (I remember at one time, somebody at Apple said 20%)
> of Apple users had the Microsoft SoftCard Z80, or imitations thereof.

I had a "Magic Sac" thing-y that plugged into the ROM port of my Atari
1040. When I put a Mac ROM into its socket, I could run most Mac
programs that I had.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Fred Cisin via cctalk

Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?


On Tue, 23 Apr 2024, Bill Gunshannon via cctalk wrote:

What about the Tandy 16 and 6000.  M68K and Z80.


Yes.
But the original comment that I was responding to was asking Z80 and 6502.

Cromemco also had a 68000 and Z80 machine.  A friend of a friend had one, 
and turned up his nose at the thought of a Tandy machine.



BTW, I found out about the Dimension 68000; it was rather expensive
https://en.wikipedia.org/wiki/Dimension_68000
68000, Z80, 8086


--
Grumpy Ol' Fred ci...@xenosoft.com


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Bill Gunshannon via cctalk




On 4/23/2024 8:06 PM, Fred Cisin via cctalk wrote:

Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?



What about the Tandy 16 and 6000.  M68K and Z80.

bill



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Fred Cisin via cctalk

Did the Dimension 68000 (a multi-processor machine) have Z80 and 6502?

Commodore 128 had Z80 and 6502


On Tue, 23 Apr 2024, Mike Katz wrote:

I think Ohio Scientific made a computer called the 3B or something like that 
that had a 6502, a Z-80 and a 6800 in it.  If my memory serves.


On 4/23/2024 7:00 PM, Fred Cisin via cctalk wrote:

On Tue, 23 Apr 2024, Van Snyder via cctalk wrote:

I shared an office with a lady who got a computer from Ohio Scientific
that had both a Z80 and a 6502. It also had two 5.25" floppy drives.
She also got a tee-shirt that said "I have two floppies." Except she
didn't.


aside from her floppies, . . .
a significant portion (I remember at one time, somebody at Apple said 20%)
of Apple users had the Microsoft SoftCard Z80, or imitations thereof.

At least one of the Apple imitations had both 6502 and Z80.

--
Grumpy Ol' Fred ci...@xenosoft.com

[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Mike Katz via cctalk
I think Ohio Scientific made a computer called the 3B or something like 
that that had a 6502, a Z-80 and a 6800 in it.  If my memory serves.


On 4/23/2024 7:00 PM, Fred Cisin via cctalk wrote:

On Tue, 23 Apr 2024, Van Snyder via cctalk wrote:

I shared an office with a lady who got a computer from Ohio Scientific
that had both a Z80 and a 6502. It also had two 5.25" floppy drives.
She also got a tee-shirt that said "I have two floppies." Except she
didn't.


aside from her floppies, . . .
a significant portion (I remember at one time, somebody at Apple said 20%)
of Apple users had the Microsoft SoftCard Z80, or imitations thereof.

At least one of the Apple imitations had both 6502 and Z80.

--
Grumpy Ol' Fred ci...@xenosoft.com





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Fred Cisin via cctalk

On Tue, 23 Apr 2024, Van Snyder via cctalk wrote:

I shared an office with a lady who got a computer from Ohio Scientific
that had both a Z80 and a 6502. It also had two 5.25" floppy drives.
She also got a tee-shirt that said "I have two floppies." Except she
didn't.


aside from her floppies, . . .
a significant portion (I remember at one time, somebody at Apple said 20%)
of Apple users had the Microsoft SoftCard Z80, or imitations thereof.

At least one of the Apple imitations had both 6502 and Z80.

--
Grumpy Ol' Fred ci...@xenosoft.com



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Cameron Kaiser via cctalk
> I shared an office with a lady who got a computer from Ohio Scientific
> that had both a Z80 and a 6502.

The Commodore 128 says hi.

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- If you're too open-minded, your brains will fall out. --



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Van Snyder via cctalk
On Tue, 2024-04-23 at 13:27 +0200, Peter Corlett via cctalk wrote:
> The Z80 takes three or four memory cycles to perform a memory access versus
> the 6502 accessing memory on every cycle,

I shared an office with a lady who got a computer from Ohio Scientific
that had both a Z80 and a 6502. It also had two 5.25" floppy drives.

She also got a tee-shirt that said "I have two floppies." Except she
didn't.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Chris Elmquist via cctalk
On Monday (04/22/2024 at 08:55PM -0700), Chuck Guzis wrote:
> On 4/22/24 20:36, Chris Elmquist wrote:
> > Hey, I did that on Sunday afternoons on the Star-100 with Lincoln and his 
> > son PD when I was in 8th grade.  I never became a manager though :-)
> > 
> > Chris
> 
> Trying to remember, was the star the same as the 6000 as far as wiring?
> That is, twisted pair and taper-pin?

I seem to remember (poorly) that it was very small diameter coax but I
don't remember the termination method.

I remember more about the Tek oscilloscope on a cart that we used than
what we were actually doing.

I don't recall that we actually cut and terminated any cables.
We were just measuring the skew and writing it down and then I think a
"professional" was going to come around later and actually adjust the
line length.  Entirely possible we were just doing busy work to stay out
of trouble and not going to be part of the actual solution.  But it was
still educational.

> Gad, that was what, 50 years ago? I remember hunkering down between the SBUs
> at ADL during the OPEC oil embargo, with my pillow from
> the Ramada and a book.  It was the warmest place in town...

Close. I think 48 yrs.  I first met PD Lincoln in 1976 when I moved into
the Mounds View School district.  Then I discovered his dad was doing
some pretty cool stuff at work...

I think they were working on the CY203 then too but there was still a
Star on the floor which is what we "played" with.

For those playing along, this was the CDC Arden Hills Operations (ARHOPS)
in St. Paul, MN.  The CDC Advanced Design Lab (ADL) was here and it's
where all the supercomputing hardware development took place after
Seymour left CDC.  So, they did the Star-100, CY203, and CY205
there and then in 1983 spun the entire ADL off into what became ETA
Systems and we did the LN2 cooled ETA10 there.

> This California boy wasn't used to Twin Cities winters...

Understood.  It's all I know except for recently experiencing summer
in Albuquerque so we might be 180 degrees (so to speak) out of phase on
that :-)

Chris
-- 
Chris Elmquist



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Peter Corlett via cctalk
On Mon, Apr 22, 2024 at 01:06:42AM +0100, Peter Coghlan via cctalk wrote:
[...]
> This was implemented by a humble 6502 running at (mostly) 2MHz, with one 8
> bit arithmetic register, two 8 bit index registers, one 8 bit stack
> pointer, a 16 bit program counter and a few flag bits.

> I would have expected that a computer featuring a Z80 with its larger
> register set, 16 bit arithmetic operations, richer instruction set and
> general bells and whistles would have been able to produce a much superior
> implementation in terms of speed or features or both but I never came
> across one.

> Why is that? Did the Z80 take more cycles to implement its more complex
> instructions? Is this an early example of RISC vs CISC?

Technically yes, but the implicit assumption in the question is wrong.

The Z80 takes three or four memory cycles to perform a memory access versus
the 6502 accessing memory on every cycle, but Z80 systems tend to be clocked
3-4 times faster so the memory bandwidth is pretty much the same. This
shouldn't be too surprising: they were designed to use the same RAM chips.
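
A back-of-the-envelope version of that comparison, in Python, with ballpark
clock rates and clocks-per-access as assumptions (the Z80 figure really
varies between 3 and 4 T-states depending on the cycle type):

    # Effective memory bandwidth ~= clock rate / clocks per memory access.
    # Clock rates and clocks-per-access below are rough, assumed figures.

    def mem_accesses_per_sec(clock_hz, clocks_per_access):
        return clock_hz / clocks_per_access

    print("6502 @ 1 MHz  :", mem_accesses_per_sec(1_000_000, 1))    # 1.0M/s
    print("6502 @ 2 MHz  :", mem_accesses_per_sec(2_000_000, 1))    # 2.0M/s
    print("Z80  @ 3.5 MHz:", mem_accesses_per_sec(3_500_000, 3.5))  # 1.0M/s
    print("Z80  @ 4 MHz  :", mem_accesses_per_sec(4_000_000, 3.5))  # ~1.14M/s

Which is the point: a 1MHz 6502 and a 3.5MHz Z80 pull roughly the same number
of accesses per second out of the same RAM.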

So the Z80 takes more cycles, but it was designed to use a faster clock and
do simpler operations per clock as that saved die space. Clock speeds have
*never* been a good way to compare CPUs.

In the hands of an expert in their respective instruction sets, both
architectures perform about as well as each other for a given memory
bandwidth (which was and still remains the limiting factor on CPUs without
caches). The 6502 could be said to "win" only inasmuch as the modern
drop-in replacement is the 14MHz 65C02S, whereas the Z80's is the Z84C00xx
which tops out at 20MHz so is only equivalent to a ~5MHz 6502.

For the same reason, a 14MHz 65C02S will leave a 68000 (maximum 16.67MHz) in
the dust, especially when working with byte-oriented data such as text where
the wider bus doesn't help. The 68000 takes four cycles to perform a memory
access, and inserts a couple of extra cycles of dead time for certain
addressing modes which require extra work in the address-generation
circuitry.

Even back in the day, it was noted that Sinclair's ZX Spectrum with its
3.5MHz Z80 could outperform their later QL with its 7.5MHz 68008.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 20:36, Chris Elmquist wrote:
> Hey, I did that on Sunday afternoons on the Star-100 with Lincoln and his son 
> PD when I was in 8th grade.  I never became a manager though :-)
> 
> Chris

Trying to remember, was the star the same as the 6000 as far as wiring?
That is, twisted pair and taper-pin?

Gad, that was what, 50 years ago? I remember hunkering down between the SBUs
at ADL during the OPEC oil embargo, with my pillow from
the Ramada and a book.  It was the warmest place in town...

This California boy wasn't used to Twin Cities winters...

--Chuck



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chris Elmquist via cctalk
Hey, I did that on Sunday afternoons on the Star-100 with Lincoln and his son 
PD when I was in 8th grade.  I never became a manager though :-)

Chris
--
Chris Elmquist

> On Apr 22, 2024, at 3:22 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 13:02, Wayne S wrote:
>> I read somewhere that the cable lengths were expressly engineered to provide 
>> that signals arrived to chips at nearly the same time so as to reduce chip 
>> “wait” times and provide more speed.
> 
> That certainly was true for the 6600.  My unit manager, fresh out of
> UofMinn had his first job with CDC, measuring wire loops on the first
> 6600 to which Seymour had attached tags that said "tune".
> 
> But then, take a gander at a modern motherboard and the lengths (sic) to
> which the designers have routed the traces so that timing works.
> 
> --Chuck
> 



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Jon Elson via cctalk

On 4/22/24 19:14, Bill Gunshannon via cctalk wrote:

On 4/22/2024 2:30 PM, Paul Koning wrote:

On Apr 22, 2024, at 2:09 PM, Bill Gunshannon via cctalk  wrote:

Following along this line of thought but also in regards all our
other small CPUs

Would it not be possible to use something like a Blue Pill to make
a small board (small enough to actually fit in the CPU socket) that
emulated these old CPUs?  Definitely enough horse power just wondered
if there was enough room for the microcode.


Microcode?


Well, that's what I would have called it.  :-)


It would bring an even more interesting concept to the table.  The
ability to add modifications to some of these chips to see just where
they might have gone.  While I don't mind the VAX, I always wondered
what the PDP-11 could have been if it had been developed instead.  :-)

bill


Of course the VAX started out as a modified PDP-11; the name makes that
clear.  And I saw an early document of what became the VAX 11/780,
labeled PDP-11/85.  Perhaps that was obfuscation.


I have never seen anything but the vaguest similarity to the PDP-11 in
the VAX.  I know it was called a VAX-11 early on but I never understood
why.

Umm, the VAX was a very obvious extension of the PDP-11 instruction
layout to 32 bits.  The PDP-11 had a 3 bit register address and 3 bit
addressing mode.  On the VAX these were each extended to 4 bits.  On the
11, the opcode field was 4 bits, although more bits were available on
unary instructions.  On the VAX, the opcode could be either 8 or 16 bits.

Quoting from the VAX11/780 Hardware Handbook Preface: "VAX-11/780 is
DIGITAL's 32 bit extension to its 11 family of minicomputers."  This is
the first sentence in the book.

As somebody who programmed PDP-11s and VAXes in assembly language
(Macro 11 and VAX Macro) I found the similarities VERY strong.  Just
that the 32-bit architecture took the constraints of the 16-bit PDP-11
away.
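
A minimal sketch of that field widening, for the general-register operand
specifiers only (the real decoders have more special cases -- PC modes, VAX
short literals, and so on -- so treat this as an illustration in Python, not
a complete decoder):

    # PDP-11 operand specifier: 3-bit addressing mode + 3-bit register (6 bits).
    # VAX operand specifier byte: 4-bit addressing mode + 4-bit register (8 bits).

    def pdp11_operand(spec6):
        return (spec6 >> 3) & 0o7, spec6 & 0o7    # (mode, register)

    def vax_operand(spec8):
        return (spec8 >> 4) & 0xF, spec8 & 0xF    # (mode, register)

    print(pdp11_operand(0o27))  # (2, 7): autoincrement on R7/PC, i.e. immediate
    print(vax_operand(0x8F))    # (8, 15): autoincrement on R15/PC, i.e. immediate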


Jon





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 17:35, Paul Koning wrote:

> What about the coincidence that a lot of today's logic runs on 3.3 volts, 
> just about the same as the first generation of IC logic (RTL).

I think I still have some survivors from the Motorola HEP mwRTL kit.
TO-100, I think.  RTL was pretty cool--slow, even by standards then, but
you could operate it in the linear region to make amplifiers and whatnot.

--Chuck



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 8:14 PM, Bill Gunshannon  
> wrote:
> 
> On 4/22/2024 2:30 PM, Paul Koning wrote:
>>> ...
>> Of course the VAX started out as a modified PDP-11; the name makes that 
>> clear.  And I saw an early document of what became the VAX 11/780, labeled 
>> PDP-11/85.  Perhaps that was obfuscation.
> 
> I have never seen anything but the vaguest similarity to the PDP-11 in
> the VAX.  I know it was called a VAX-11 early on but I never understood
> why.

Hm.  I thought it was pretty obvious.  The addressing modes are similar but a 
superset, it has similar registers, just twice as many and twice as big.  The 
instructions are similar but extended.  And the notation used to describe the 
instruction set was used earlier on the PDP-11.  For me as a PDP-11 assembly 
language programmer the kinship was obvious and the naming made perfect sense.

>> Anyway, I would think such a small microprocessor could emulate a PDP-11 
>> just fine, and probably fast enough.  The issue isn't so much the 
>> instruction set emulation but rather the electrical interface.  That's what 
>> would be needed to be a drop-in replacement.  Ignoring the voltage levels, 
>> there's the matter of implementing whatever the bus protocols are.
>> Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
>> with the PIO engines providing help with the low level signaling.  Sounds 
>> like a fun exercise for the student.
> 
> I wasn't thinking just the PDP-11.  I was thinking about the ability
> to replace failing CPUs of other flavors once production comes to an
> end.  I suspect that is far enough in the future that I won't have to
> worry about it, but it sounded like an interesting project.
> 
> bill

It certainly would be.  And if you needed to replace a failed F-11 or single 
chip PDP-8, it might be useful now.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 7:03 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 14:34, dwight via cctalk wrote:
> 
>> For those that don't know what a UV(UX)201 was, it was most commonly used 
>> for audio amplification in early battery powered radios. These used a lot of 
>> filament current, not like later miniature tubes.
>> They had a UV(UX)200 tube for RF detections that worked better as a grid 
>> leak detector, I think because of less cutoff voltage needed as a detector.
>> The A series used a better getter and lower current filament ( one or both? 
>> ) but still used a lot of filament current.
> 
> I've long considered it to be an interesting coincidence that the
> filament voltage of the UV201 was 5V, just like much later TTL logic.

What about the coincidence that a lot of today's logic runs on 3.3 volts, just 
about the same as the first generation of IC logic (RTL).

> Folks don't recall that RCA was formed to get around a patent issue on
> the basic idea of a triode.

Interesting!

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Bill Gunshannon via cctalk




On 4/22/2024 2:30 PM, Paul Koning wrote:




On Apr 22, 2024, at 2:09 PM, Bill Gunshannon via cctalk  
wrote:



Following along this line of thought but also in regards all our
other small CPUs

Would it not be possible to use something like a Blue Pill to make
a small board (small enough to actually fit in the CPU socket) that
emulated these old CPUs?  Definitely enough horse power just wondered
if there was enough room for the microcode.


Microcode?


Well, that's what I would have called it.  :-)




It would bring an even more interesting concept to the table.  The
ability to add modifications to some of these chips to see just where
they might have gone.  While I don't mind the VAX, I always wondered
what the PDP-11 could have been if it had been developed instead.  :-)

bill


Of course the VAX started out as a modified PDP-11; the name makes that clear.  
And I saw an early document of what became the VAX 11/780, labeled PDP-11/85.  
Perhaps that was obfuscation.


I have never seen anything but the vaguest similarity to the PDP-11 in
the VAX.  I know it was called a VAX-11 early on but I never understood
why.



Anyway, I would think such a small microprocessor could emulate a PDP-11 just 
fine, and probably fast enough.  The issue isn't so much the instruction set 
emulation but rather the electrical interface.  That's what would be needed to 
be a drop-in replacement.  Ignoring the voltage levels, there's the matter of 
implementing whatever the bus protocols are.

Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
with the PIO engines providing help with the low level signaling.  Sounds like 
a fun exercise for the student.


I wasn't thinking just the PDP-11.  I was thinking about the ability
to replace failing CPUs of other flavors once production comes to an
end.  I suspect that is far enough in the future that I won't have to
worry about it, but it sounded like an interesting project.

bill



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Jon Elson via cctalk

On 4/22/24 16:06, Paul Berger via cctalk wrote:


On 2024-04-22 5:21 p.m., Chuck Guzis via cctalk wrote:

On 4/22/24 13:02, Wayne S wrote:
I read somewhere that the cable lengths were expressly 
engineered to provide that signals arrived to chips at 
nearly the same time so as to reduce chip “wait” times 
and provide more speed.
That certainly was true for the 6600.  My unit manager, 
fresh out of
UofMinn had his first job with CDC, measuring wire loops 
on the first

6600 to which Seymour had attached tags that said "tune".

But then, take a gander at a modern motherboard and the 
lengths (sic) to
which the designers have routed the traces so that timing 
works.


--Chuck


Shortly after I started at IBM I assisted one of the 
senior CEs doing engineering changes on a 3031 and the 
back of the logic gates was a mass of what IBM called 
tri-lead, when I saw it I wonder how it could possibly 
work.  The tri-lead is basically a 3 wire ribbon cable 
that has the two outer wires grounded and is precisely 
made to have reliable characteristics.  It was explained 
to me that sometimes they would change the length of the 
tri-lead in a connection to adjust signal timing.


I am not sure when IBM started using tri-lead


IBM 360's used Chabin TLC (transmission line cable) that 
were essentially a ribbon cable version of tri-lead.  18 
signals wide, terminating in the standard 24-pin connectors 
just like the SLT cards had.  I think that the Tri-lead and 
TLC both had a 91 Ohm impedance.  The reason for splitting 
the ribbons into individual signals was to reduce 
crosstalk.  Interesting note, 370's version of ECL used a 
+1.25 V and -3V power supply that shifted the logic levels 
to +400 mV and -400 mV, and were terminated to ground.  If 
you wanted to scope a signal, you could unplug a tri-lead 
and connect it to a scope with a 91 Ohm terminator.


Jon





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 14:34, dwight via cctalk wrote:

> For those that don't know what a UV(UX)201 was, it was most commonly used for 
> audio amplification in early battery powered radios. These used a lot of 
> filament current, not like later miniature tubes.
> They had a UV(UX)200 tube for RF detections that worked better as a grid leak 
> detector, I think because of less cutoff voltage needed as a detector.
> The A series used a better getter and lower current filament ( one or both? ) 
> but still used a lot of filament current.

I've long considered it to be an interesting coincidence that the
filament voltage of the UV201 was 5V, just like much later TTL logic.

Folks don't recall that RCA was formed to get around a patent issue on
the basic idea of a triode.

--Chuck



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk
Well, it was beyond the PCs and Sparc stations we had access to at the 
time.


On 4/22/2024 3:28 PM, Christian Kennedy via cctalk wrote:


On 4/22/24 13:12, Fred Cisin via cctalk wrote:

On Mon, 22 Apr 2024, Mike Katz via cctalk wrote:
[Big snip -- hopefully I managed to get attribution right, apologies 
in advance if I borked it]


When I was working for a 6800 C compiler company we could simulate 
all 68000 CPUs before the 68020. The 68020 with its pipelining and 
branch prediction made it impossible to do cycle accurate timing.


Again, not impossible, but very likely not feasible.


Having done cycle accurate simulation of a pipelined superscalar 
processor, I can assure you it's possible, particularly with hardware 
assist a la Quickturn.


It's also a lot like watching paint dry.





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread dwight via cctalk




From: ben via cctalk 
Sent: Monday, April 22, 2024 12:43 PM
To: cctalk@classiccmp.org 
Subject: [cctalk] Re: Z80 vs other microprocessors of the time.

On 2024-04-22 1:02 p.m., Chuck Guzis via cctalk wrote:

> I'd like to see a Z80 implemented with UV-201 vacuum tubes... :)
> --Chuck

Real computers use glow tubes like the NE-2 or the NE-77. :)

For those that don't know what a UV(UX)201 was, it was most commonly used for 
audio amplification in early battery powered radios. These used a lot of 
filament current, not like later miniature tubes.
They had a UV(UX)200 tube for RF detections that worked better as a grid leak 
detector, I think because of less cutoff voltage needed as a detector.
The A series used a better getter and lower current filament ( one or both? ) 
but still used a lot of filament current.
Dwight






[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 14:04, Paul Koning wrote:

> I never had my hands on a 6600, only a 6400 which is a single unit machine.  
> So I had to do some thinking to understand why someone would do a register 
> transfer with L (shift operation) rather than B (boolean operation) when I 
> first saw that in my code reading.  The answer is that both instructions take 
> 300 ns, but they are in different functional units on the 6600 so they can 
> start 100 ns apart.

Jack Neuhaus taught the timing course at SVLOPS, at least to the SSD
crowd.  It got to be fun after awhile. We tried to do the same for the
STAR, but the timing was pretty complex and not a good subject for
pencil-and-paper work.   There, the biggest performance gains there came
from vectorization.

Of course, if you had ECS, large block memory moves were easy.  Lower
CYBER (e.g. 73) with CMU might have also benefited from its use; I don't
recall if it was used for storage moves early on.  I do recall that the
standard test for CMU presence, packing a jump in the lower 30 bits of a
ginned-up CMU instruction word, was broken by the Cyber 170.

--Chuck





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk
They sure do now, but not back in 1964.  :-)

paul

> On Apr 22, 2024, at 5:13 PM, Mike Katz via cctalk  
> wrote:
> 
> Compilers do that with what is called loop rotation optimization.
> 
> On 4/22/2024 3:59 PM, Chuck Guzis via cctalk wrote:
>> On 4/22/24 13:53, Paul Koning via cctalk wrote:
>>> In COMPASS:
>>> 
>>> MORE  SA1 A1+B2   (B2 = 2)
>>>       SA2 A2+B2
>>>       BX6 X1
>>>       LX7 X2
>>>       SB3 B3-2
>>>       SA6 A6+B2
>>>       SA7 A7+B2
>>>       PL  B3,MORE
>> My recollection is that putting the stores at the top of the loop and
>> the loads at the bottom managed to save a few cycles.  Of course, you
>> have to prime the loop...
>> 
>> --Chuck
>> 
> 



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk

True, if 1 cycle per second or minute is an acceptable emulation speed.

For that kind of emulation to work the emulator needs to be fed the same 
tasks in the same order and for the same core.  This is even more true 
when the CPU is waiting on internal or external resources.  If you can 
emulate all of the cores, all 3 (or more) levels of cache, all possible 
branch prediction, all possible out of order execution and all possible 
external influences then yes you can emulate anything.  But that would 
be like using one of today's petaflop hyper computers to emulate an ARM 
9 running at the speed of a Z-80 or even slower.


On 4/22/2024 2:57 PM, Paul Koning wrote:



On Apr 22, 2024, at 3:45 PM, Mike Katz  wrote:

Cycle accurate emulation becomes impossible in the following circumstances:
• Branch prediction and pipelining can cause out of order execution and 
the execution path become data dependent. ...

I disagree.  Clearly a logic model will do cycle accurate simulation.  So an 
abstraction of that which still preserves the details of out of order 
execution, data dependency, etc., will also be cycle accurate.

It certainly is true that modern high performance processors with all those 
complexities are hard to simulate, but not impossible.

paul






[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk

Compilers do that with what is called loop rotation optimization.

On 4/22/2024 3:59 PM, Chuck Guzis via cctalk wrote:

On 4/22/24 13:53, Paul Koning via cctalk wrote:

In COMPASS:

MORE  SA1 A1+B2   (B2 = 2)
      SA2 A2+B2
      BX6 X1
      LX7 X2
      SB3 B3-2
      SA6 A6+B2
      SA7 A7+B2
      PL  B3,MORE

My recollection is that putting the stores at the top of the loop and
the loads at the bottom managed to save a few cycles.  Of course, you
have to prime the loop...

--Chuck





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Berger via cctalk



On 2024-04-22 5:21 p.m., Chuck Guzis via cctalk wrote:

On 4/22/24 13:02, Wayne S wrote:

I read somewhere that the cable lengths were expressly engineered to provide 
that signals arrived to chips at nearly the same time so as to reduce chip 
“wait” times and provide more speed.

That certainly was true for the 6600.  My unit manager, fresh out of
UofMinn had his first job with CDC, measuring wire loops on the first
6600 to which Seymour had attached tags that said "tune".

But then, take a gander at a modern motherboard and the lengths (sic) to
which the designers have routed the traces so that timing works.

--Chuck


Shortly after I started at IBM I assisted one of the senior CEs doing 
engineering changes on a 3031 and the back of the logic gates was a mass 
of what IBM called tri-lead; when I saw it I wondered how it could 
possibly work.  The tri-lead is basically a 3 wire ribbon cable that has 
the two outer wires grounded and is precisely made to have reliable 
characteristics.  It was explained to me that sometimes they would 
change the length of the tri-lead in a connection to adjust signal timing.


I am not sure when IBM started using tri-lead, but I also recall seeing a 
168 that had some third party memory that was in a box hung on the end 
of a frame and had a large bundle of tri-lead coming out of it that 
disappeared under the covers.  The site CE told me that those tri-leads 
were all plugged into specific locations on the back of the boards on 
one of the card gates, and if they had a problem with memory, they would 
call in the techs that looked after the third party memory and have them 
disconnect it all.  The last system I saw with tri-lead was a 3081; most 
of the logic was in TCMs, but the memory was on a separate card gate and 
connected to the main CPU board with tri-lead.


Paul.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:59 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 13:53, Paul Koning via cctalk wrote:
>> In COMPASS:
>> 
>> MORE  SA1 A1+B2   (B2 = 2)
>>       SA2 A2+B2
>>       BX6 X1
>>       LX7 X2
>>       SB3 B3-2
>>       SA6 A6+B2
>>       SA7 A7+B2
>>       PL  B3,MORE
> 
> My recollection is that putting the stores at the top of the loop and
> the loads at the bottom managed to save a few cycles.  Of course, you
> have to prime the loop...
> 
> --Chuck

Might well be, I don't remember.  Or moving the SB3 (the loop counter) to be 
right after the loads is probably helpful.  The full answer depends on 
understanding the timing, both of the instructions and of the memory references 
that are set in motion by them. 

I never had my hands on a 6600, only a 6400 which is a single unit machine.  So 
I had to do some thinking to understand why someone would do a register 
transfer with L (shift operation) rather than B (boolean operation) when I 
first saw that in my code reading.  The answer is that both instructions take 
300 ns, but they are in different functional units on the 6600 so they can 
start 100 ns apart.

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 13:53, Paul Koning via cctalk wrote:
> In COMPASS:
> 
> MORE  SA1 A1+B2   (B2 = 2)
>       SA2 A2+B2
>       BX6 X1
>       LX7 X2
>       SB3 B3-2
>       SA6 A6+B2
>       SA7 A7+B2
>       PL  B3,MORE

My recollection is that putting the stores at the top of the loop and
the loads at the bottom managed to save a few cycles.  Of course, you
have to prime the loop...
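
That restructuring is what compilers call loop rotation (a simple form of
software pipelining), as Mike notes elsewhere in the thread.  A sketch of the
shape in Python -- mine, purely to show the restructuring, not the 6600 timing:

    # "Stores at the top, loads at the bottom": prime the loop with the first
    # load, then each trip stores the value fetched on the previous trip while
    # the next load is already under way.

    def copy_rotated(src, dst):
        n = len(src)
        if n == 0:
            return
        pending = src[0]              # prime: first load happens before the loop
        for i in range(1, n):
            dst[i - 1] = pending      # store result of the previous iteration
            pending = src[i]          # issue the next load "early"
        dst[n - 1] = pending          # drain: final store after the loop

    a = [1, 2, 3, 4]
    b = [0] * 4
    copy_rotated(a, b)
    print(b)                          # [1, 2, 3, 4]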

--Chuck





[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 3:31 PM, ben via cctalk  wrote:
> 
> 
> > One other factor is that RISC machines rely on simple operations carefully
> > arranged by optimizing compilers (or, in some cases, skillful programmers).
> > A multi-step operation can be encoded in a sequence of RISC operations run
> > through an optimizing scheduler more effectively than the equivalent
> > sequence of steps inside the micro-engine of a CISC processor.
> 
> Lets call them LOAD/STORE architectures.
> 
> Classic cpu designs like the PDP-1, might be better called RISC.

Um, no.  Load-store machines like the PDP-1, or many other machines of that 
era, have instructions where typically one operand is a register and the other 
is a memory location.  That means arithmetic operations necessarily include a 
memory reference, implying a memory wait.

A key part of RISC is arithmetic on registers only, and enough registers so you 
can schedule the loads and stores to run concurrently with other arithmetic 
operations.  The CDC 6600 is the pioneering example.  A very simple scenario 
would be a memory move loop, where you'd issue two loads to two different 
registers, then two register-register move operations that use different 
functional units, followed by two store operations from two different 
registers.  (The move operations because the 6600 would do loads to one set of 
registers and stores from a different set.)  Keeping two memory operations in 
flight concurrently made quite a difference.

In COMPASS:

MORE  SA1 A1+B2   (B2 = 2)
      SA2 A2+B2
      BX6 X1
      LX7 X2
      SB3 B3-2
      SA6 A6+B2
      SA7 A7+B2
      PL  B3,MORE

paul

[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Ethan Dicks via cctalk
On Mon, Apr 22, 2024 at 2:30 PM Paul Koning via cctalk
 wrote:
> Anyway, I would think such a small microprocessor could emulate a PDP-11 just 
> fine, and probably fast enough.  The issue isn't so much the instruction set 
> emulation but rather the electrical interface.  That's what would be needed 
> to be a drop-in replacement.  Ignoring the voltage levels, there's the matter 
> of implementing whatever the bus protocols are.

Emulating an F-11 chip or a J-11 chip is certainly possible with a
modern MCU, just need TTL-friendly I/O.  F-11 is 40-pins (and can have
additional instructions added by adding microcode ROMs next to the
CPU) and the J-11 is 64 pins on a fat chip carrier.

> Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
> with the PIO engines providing help with the low level signaling.  Sounds 
> like a fun exercise for the student.

Could be a good start but would still need level shifters.

J-11 runs at 15-18MHz, for an idea of how fast the bus implementation has to be.

-ethan


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:21 PM, Mike Katz via cctalk  
> wrote:
> 
> Once CPUs became faster than memory the faster the memory the faster the CPU 
> could run.
> 
> That is where CACHE came in.  Expensive small high speed ram chips would be 
> able to feed the CPU faster except in case of a cache miss and then the cache 
> had to reload from slow memory.  That is why multiple cache buffers were 
> implemented so one could be filling (predictively) while another buffer was 
> being used.

An early cache, though not called that, is the track buffer in the ARMAC, a 
1955 or so research computer built at CWI (then called MC) in Amsterdam.  Its 
main memory was a drum, like its predecessors, but it would keep the most 
recently accessed track in memory (core?) for fast access.  That was handled in 
hardware if I remember right, so it's exactly like a one-entry cache with a 
line size of whatever the track length is (32 words?).
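
A toy model of that one-track buffer (track size and drum contents are
arbitrary here, not the real ARMAC figures), in Python:

    # Drum with a single track buffer: a read hits the buffer if it falls in
    # the most recently accessed track; otherwise the whole track is read in
    # first (the slow drum access), exactly like a one-line cache.

    WORDS_PER_TRACK = 32

    class TrackBufferDrum:
        def __init__(self, storage):
            self.storage = storage        # flat list of words on the drum
            self.track_no = None          # track currently held in the buffer
            self.buffer = []
            self.misses = 0

        def read(self, addr):
            track, offset = divmod(addr, WORDS_PER_TRACK)
            if track != self.track_no:    # miss: refill the buffer from the drum
                self.misses += 1
                start = track * WORDS_PER_TRACK
                self.buffer = self.storage[start:start + WORDS_PER_TRACK]
                self.track_no = track
            return self.buffer[offset]

    drum = TrackBufferDrum(list(range(1024)))
    for a in (0, 1, 2, 40, 41, 3):        # cold miss + two track switches
        drum.read(a)
    print("misses:", drum.misses)         # 3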

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:24 PM, Mike Katz via cctalk  
> wrote:
> 
>> Again, not impossible, but very likely not feasible. 
> 
> Well not possible with the hardware available at the time.
> 
> If one cycle per minute or less is acceptable then I guess it was possible.
> 
> That is why we used in circuit emulators to do cycle accurate counting on 
> more complex machines.  These machines were clunky and unreliable but they 
> worked for the most part.

Well, the SB-1 is a multi-core pipelined machine with multiple caches and all 
sorts of other complications.  And the company certainly had a cycle accurate 
simulator.  They were reluctant to let it out to customers, but we leaned hard 
enough.  It was slow, indeed.  Certainly not a cycle per minute; I'm pretty 
sure it was a whole lot more than a cycle per second.  Given that the code I 
was debugging was only a few hundred instructions long, it was quite acceptable.

Speaking of slow emulation: a CDC 6600 gate level model, in VHDL, is indeed 
slow.  Now we're indeed talking about a cycle per second.  I'm thinking I could 
translate the VHDL to C and have it go faster (by omitting irrelevant detail 
that VHDL cares about but the simulation doesn't).

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:21 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 13:02, Wayne S wrote:
>> I read somewhere that the cable lengths were expressly engineered to provide 
>> that signals arrived to chips at nearly the same time so as to reduce chip 
>> “wait” times and provide more speed. 
> 
> That certainly was true for the 6600.  My unit manager, fresh out of
> UofMinn had his first job with CDC, measuring wire loops on the first
> 6600 to which Seymour had attached tags that said "tune".

Not so much "to arrive at the same time" but rather "to arrive at the correct 
time".  And not so much to reduce chip wait times, because for the most part 
that machine doesn't wait for things.  Instead, it relies on predictable 
timing, so that an action set in motion is known to deliver its result at a 
specific later time, and when that signal arrives there will be some element 
accepting it right then.

A nice example is the exchange jump instruction processing, which fires off a 
bunch of memory read/restore operations and sends off current register values 
across the various memory buses.  The memory read completes and sends off the 
result, then 100 ns or so later the register value shows up and is inserted 
into the write data path of the memory to complete the core memory full cycle.  
(So it isn't a read/restore actually, but rather a "read/replace".)

Another example is the PPU "barrel" which books like Thornton's show as a 
passive thing except at the "slot" where the arithmetic lives.  In reality, 
about 6 positions before the slot the current memory address (PC or current 
operand) is handed off to the memory so that just before that PP rotates to the 
slot the read data will be available to it.  And then the output of the slot 
becomes the restore, or update, data for the write part of the memory full 
cycle.

> But then, take a gander at a modern motherboard and the lengths (sic) to
> which the designers have routed the traces so that timing works.

Indeed, and with multi-Gb/s interfaces this stuff really matters.  Enough so 
that high end processors document the wire lengths inside the package, so that 
"match interconnect lengths" doesn't mean "match etch lengths" but rather 
"match etch plus in-package lengths".

The mind boggles at the high end FPGAs with dozens of interfaces running at 
data rates up to 58 Gb/s.

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Fred Cisin via cctalk

Again, not impossible, but very likely not feasible.


On Mon, 22 Apr 2024, Mike Katz wrote:

Well not possible with the hardware available at the time.


Some stuff is getting faster, . . . 
Can you estimate how much faster it would need to be?


(perhaps then, log(2) of that, times 18 months? :-)
'course, with Moore no longer around, who's gonna enforce his law? :-)


--
Grumpy Ol' Fred ci...@xenosoft.com


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Christian Kennedy via cctalk



On 4/22/24 13:12, Fred Cisin via cctalk wrote:

On Mon, 22 Apr 2024, Mike Katz via cctalk wrote:
[Big snip -- hopefully I managed to get attribution right, apologies in 
advance if I borked it]


When I was working for a 6800 C compiler company we could simulate 
all 68000 CPUs before the 68020.  The 68020 with its pipelining and 
branch prediction made it impossible to do cycle accurate timing.


Again, not impossible, but very likely not feasible.


Having done cycle accurate simulation of a pipelined superscalar 
processor, I can assure you it's possible, particularly with hardware 
assist a la Quickturn.


It's also a lot like watching paint dry.

--
Christian Kennedy, Ph.D.
ch...@mainecoon.com AF6AP | DB0692 | PG00029419
http://www.mainecoon.comPGP KeyID 108DAB97
PGP fingerprint: 4E99 10B6 7253 B048 6685 6CBC 55E1 20A3 108D AB97
"Mr. McKittrick, after careful consideration…"



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk
Again, not impossible, but very likely not feasible. 


Well not possible with the hardware available at the time.

If one cycle per minute or less is acceptable then I guess it was possible.

That is why we used in circuit emulators to do cycle accurate counting 
on more complex machines.  These machines were clunky and unreliable but 
they worked for the most part.



On 4/22/2024 3:12 PM, Fred Cisin via cctalk wrote:

On Mon, 22 Apr 2024, Mike Katz via cctalk wrote:
Cycle accurate emulation becomes impossible in the following 
circumstances:

* Branch prediction and pipelining can cause out of order execution,
  and the execution path becomes data dependent.
* Cache memory.  It can be very difficult to predict a cache flush,
  a cache miss, or a lookaside buffer hit.
* Memory management can inject wait states and cause other cycle
  counting issues.
* Peripherals can inject unpredictable wait states.
* Multi-core processors, because you don't necessarily know which core
  is doing what, and one core may be waiting on another.
* DMA can cause some CPUs to pause because the bus is busy doing DMA
  transfers (not all processors have this issue).
* Some CPUs shut down clocks and peripherals when they are not used,
  and they take time to restart.
* Any code that waits for some kind of external input.


Ridiculously impractical, but not impossible.
All of those things could be calculated, and worked around.
Admittedly, we might not have a machine fast enough to do so.
Whereas, emulation that doesn't need to do those can be done with 
systems not extremely faster than the one being emulated.


When I was working for a 6800 C compiler company we could simulate 
all 68000 CPUs before the 68020.  The 68020 with its pipelining and 
branch prediction made it impossible to do cycle accurate timing.


Again, not impossible, but very likely not feasible.

--
Grumpy Ol' Fred ci...@xenosoft.com




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk
Once CPUs became faster than memory, the faster the memory, the faster the 
CPU could run.


That is where CACHE came in.  Expensive, small, high-speed RAM chips could 
feed the CPU faster, except in the case of a cache miss, when the cache had 
to reload from slow memory.  That is why multiple cache buffers were 
implemented, so one could be filling (predictively) while another buffer was 
being used.


Some early CPUs were run slowly enough that the memory could keep up, 
and some had built-in hardware handshaking.  For example, the 68000 had a 
signal called DTACK which was used by the memory/peripheral to say that 
it had latched the data on the bus (on writes) or that the data was 
stable on the bus (on reads).


Or they used quadrature clocks (like the 6809 [the 6809E ran a 2-phase 
non-quadrature clock]) that gave memory more than one cycle time to respond.
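
A minimal sketch of that handshake, the way an emulator might model it (the 
DTACK name follows the 68000 as above; the four-clock base cycle and the 
wait-state mechanics here are simplified, illustrative numbers):

#include <stdbool.h>
#include <stdint.h>

/* Simplified model of an asynchronous DTACK-style bus read: the CPU puts out
 * an address, then inserts wait states until the addressed device asserts
 * /DTACK to say its data is stable.  A slow device simply answers later; the
 * CPU never needs to know its timing in advance. */
typedef struct {
    unsigned wait_states;   /* cycles the addressed device needs to respond */
    uint16_t data;          /* data the device will eventually drive        */
} device_t;

uint16_t cpu_read(const device_t *dev, unsigned *cycles)
{
    bool dtack = false;
    unsigned remaining = dev->wait_states;

    *cycles += 4;                    /* base bus cycle, for illustration    */
    while (!dtack) {                 /* wait states until /DTACK asserted   */
        if (remaining == 0)
            dtack = true;            /* device says: data is stable         */
        else {
            remaining--;
            (*cycles)++;             /* one extra clock per wait state      */
        }
    }
    return dev->data;
}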


On 4/22/2024 3:02 PM, Wayne S via cctalk wrote:

I read somewhere that the cable lengths were expressly engineered so that 
signals arrived at the chips at nearly the same time, reducing chip “wait” 
times and providing more speed.

So that raises a question. Older chips like the Z80 and 8080 lines required other 
support chips, which added latency while the system waited for them to 
“settle”.  Does that imply that newer microprocessors that have that support 
on-chip are just generally faster because of it?


Sent from my iPhone


On Apr 22, 2024, at 12:54, Chuck Guzis via cctalk  wrote:

On 4/22/24 12:31, ben via cctalk wrote:

Classic CPU designs like the PDP-1 might be better called RISC.
Back then you matched the CPU word length to the data you were using.
40 bits made a lot of sense for real computing, even if you
had no RAM at the time, just drum.

I'd call the CDC 6600 a classic RISC design, at least as far as the CPU
went. Classes were given to programming staff on timing code precisely;
I spent many happy hours trying to squeeze the last few cycles out of a
loop (where the biggest bang for the buck was possible).

I think bitsavers (I haven't looked) has a document or two on how to
time code for that thing.

--Chuck






[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 13:02, Wayne S wrote:
> I read somewhere that the cable lengths were expressly engineered so that 
> signals arrived at the chips at nearly the same time, reducing chip 
> “wait” times and providing more speed.

That certainly was true for the 6600.  My unit manager, fresh out of
UofMinn, had his first job with CDC, measuring wire loops on the first
6600 to which Seymour had attached tags that said "tune".

But then, take a gander at a modern motherboard and the lengths (sic) to
which the designers have routed the traces so that timing works.

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Fred Cisin via cctalk

On Mon, 22 Apr 2024, Mike Katz via cctalk wrote:

Cycle accurate emulation becomes impossible in the following circumstances:
* Branch prediction and pipelining can cause out of order execution,
  and the execution path becomes data dependent.
* Cache memory.  It can be very difficult to predict a cache flush,
  a cache miss, or a lookaside buffer hit.
* Memory management can inject wait states and cause other cycle
  counting issues.
* Peripherals can inject unpredictable wait states.
* Multi-core processors, because you don't necessarily know which core
  is doing what, and one core may be waiting on another.
* DMA can cause some CPUs to pause because the bus is busy doing DMA
  transfers (not all processors have this issue).
* Some CPUs shut down clocks and peripherals when they are not used,
  and they take time to restart.
* Any code that waits for some kind of external input.


Ridiculously impractical, but not impossible.
All of those things could be calculated, and worked around.
Admittedly, we might not have a machine fast enough to do so.
Whereas, emulation that doesn't need to do those can be done with systems 
not extremely faster than the one being emulated.


When I was working for a 6800 C compiler company we could simulate all 68000 
CPUs before the 68020.  The 68020 with its pipelining and branch prediction 
made it impossible to do cycle accurate timing.


Again, not impossible, but very likely not feasible.

--
Grumpy Ol' Fred ci...@xenosoft.com

[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Mike Katz via cctalk

Cycle accurate emulation becomes impossible in the following circumstances:

 * Branch prediction and pipelining can cause out of order execution,
   and the execution path becomes data dependent.
 * Cache memory.  It can be very difficult to predict a cache flush,
   a cache miss, or a lookaside buffer hit.
 * Memory management can inject wait states and cause other cycle
   counting issues.
 * Peripherals can inject unpredictable wait states.
 * Multi-core processors, because you don't necessarily know which core
   is doing what, and one core may be waiting on another.
 * DMA can cause some CPUs to pause because the bus is busy doing DMA
   transfers (not all processors have this issue).
 * Some CPUs shut down clocks and peripherals when they are not used,
   and they take time to restart.
 * Any code that waits for some kind of external input.
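
For contrast, the kind of static cycle counting that does work on simple, 
in-order parts looks roughly like the sketch below (illustrative only; the 
per-opcode costs would be filled in from the data sheet).  Every item in the 
list above breaks the one-opcode-one-constant assumption it relies on.

#include <stdint.h>

/* Sketch of a table-driven cycle counter for a simple in-order CPU, where
 * every opcode has a fixed, documented cost.  Caches, pipelines, DMA and
 * wait states make the real cost data- and history-dependent, which is
 * exactly what this static table cannot express. */
#define NUM_OPCODES 256

static const uint8_t base_cycles[NUM_OPCODES] = { /* from the data sheet */ 0 };

typedef struct {
    uint16_t pc;
    uint64_t cycles;
    uint8_t  mem[65536];
} cpu_t;

void step(cpu_t *c)
{
    uint8_t opcode = c->mem[c->pc++];
    c->cycles += base_cycles[opcode];  /* exact only while cost is data-independent */
    /* decode/execute would go here; a taken branch might add a fixed penalty,
     * but nothing here models cache misses, DMA stalls or reordering. */
}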

When I was working for a 6800 C compiler company we could simulate all 
68000 CPUs before the 68020.  The 68020 with its pipelining and branch 
prediction made it impossible to do cycle accurate timing.



On 4/22/2024 1:46 PM, Paul Koning via cctalk wrote:



On Apr 22, 2024, at 2:34 PM, Chuck Guzis via cctalk  
wrote:

On 4/22/24 11:09, Bill Gunshannon via cctalk wrote:


Following along this line of thought but also in regards all our
other small CPUs

Would it not be possible to use something like a Blue Pill to make
a small board (small enough to actually fit in the CPU socket) that
emulated these old CPUs?  Definitely enough horse power just wondered
if there was enough room for the microcode.

Blue pills are so yesterday!  There are far more small-footprint MCUs
out there.   More RAM than any Z80 ever had as well as lots of flash for
the code as well as pipelined 32-bit execution at eye-watering (relative
to the Z80) speeds.

Could it emulate a Z80?  I don't see any insurmountable obstacles to
that.  Could it be cycle- and timing- accurate?   That's a harder one to
predict, but probably.

Probably not.  Cycle accurate simulation is very hard.  It's only rarely been 
done for any CPU, and if done it tends to be incredibly slow.  I remember once 
using a MIPS cycle-accurate simulator (for the SB-1, the core inside the 
SB-1250, later called BCM-12500).  It was needed because the L2 cache flush 
code could not be debugged any other way, but it was very slow indeed.  Almost 
as bad as running the CPU logic model in a Verilog or VHDL simulator.  I don't 
remember the numbers but it probably was only a few thousand instructions per 
second.

Then again, for the notion of a drop-in replacement for the original chip, you 
don't need a cycle accurate simulator, just one with compatible pin signalling. 
 That's not nearly so hard -- though still harder than a SIMH style ISA 
simulation.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Wayne S via cctalk
I read somewhere that the cable lengths were expressly engineered so that 
signals arrived at the chips at nearly the same time, reducing chip “wait” 
times and providing more speed.

So that raises a question. Older chips like the Z80 and 8080 lines required other 
support chips, which added latency while the system waited for them to 
“settle”.  Does that imply that newer microprocessors that have that support 
on-chip are just generally faster because of it?


Sent from my iPhone

> On Apr 22, 2024, at 12:54, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 12:31, ben via cctalk wrote:
>> 
> 
>> 
>> Classic CPU designs like the PDP-1 might be better called RISC.
>> Back then you matched the CPU word length to the data you were using.
>> 40 bits made a lot of sense for real computing, even if you
>> had no RAM at the time, just drum.
> 
> I'd call the CDC 6600 a classic RISC design, at least as far as the CPU
> went. Classes were given to programming staff on timing code precisely;
> I spent many happy hours trying to squeeze the last few cycles out of a
> loop (where the biggest bang for the buck was possible).
> 
> I think bitsavers (I haven't looked) has a document or two on how to
> time code for that thing.
> 
> --Chuck
> 
> 


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 3:45 PM, Mike Katz  wrote:
> 
> Cycle accurate emulation becomes impossible in the following circumstances:
>   • Branch prediction and pipelining can cause out of order execution and 
> the execution path become data dependent. ...

I disagree.  Clearly a logic model will do cycle accurate simulation.  So an 
abstraction of that which still preserves the details of out of order 
execution, data dependency, etc., will also be cycle accurate.

It certainly is true that modern high performance processors with all those 
complexities are hard to simulate, but not impossible.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 12:31, ben via cctalk wrote:
> 

> 
> Classic CPU designs like the PDP-1 might be better called RISC.
> Back then you matched the CPU word length to the data you were using.
> 40 bits made a lot of sense for real computing, even if you
> had no RAM at the time, just drum.

I'd call the CDC 6600 a classic RISC design, at least as far as the CPU
went. Classes were given to programming staff on timing code precisely;
I spent many happy hours trying to squeeze the last few cycles out of a
loop (where the biggest bang for the buck was possible).

I think bitsavers (I haven't looked) has a document or two on how to
time code for that thing.

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Fred Cisin via cctalk

On 2024-04-22 1:02 p.m., Chuck Guzis via cctalk wrote:

I'd like to see a Z80 implemented with UV-201 vacuum tubes... :) --Chuck


On Mon, 22 Apr 2024, ben via cctalk wrote:

Real computers use glow tubes like the NE-2 or the NE-77. :)


I thought that real computers use gears




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread ben via cctalk

On 2024-04-22 1:02 p.m., Chuck Guzis via cctalk wrote:

I'd like to see a Z80 implemented with UV-201 vacuum tubes... :) 
--Chuck


Real computers use glow tubes like the NE-2 or the NE-77. :)







[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread ben via cctalk



>One other factor is that RISC machines rely on simple operations
>carefully arranged by optimizing compilers (or, in some cases,
>skillful programmers).  A multi-step operation can be encoded in a
>sequence of RISC operations run through an optimizing scheduler more
>effectively than the equivalent sequence of steps inside the
>micro-engine of a CISC processor.

Lets call them LOAD/STORE architectures.

Classic CPU designs like the PDP-1 might be better called RISC.
Back then you matched the CPU word length to the data you were using.
40 bits made a lot of sense for real computing, even if you
had no RAM at the time, just drum.

IBM set the standard for 8-bit bytes, 16- and 32-bit words, and 64-bit
floating point. Things are complex because you need to pack things to
fit the standard-size boxes. Everything is a trade-off.
Why? Because the IBM 7030 Stretch (64 bits) was a flop.

Save memory, CISC.
Use memory,  RISC.
Simple memory, Microprocessors.

My argument is that processor development is always built around what
memory you have available at the time.

How many Z80s can you think of that used core memory?
I think only one 8080A system ever used core memory, from BYTE magazine.

Improvements in memory were often improvements in logic as well,
as far as CPU design goes.

If CPUs were designed for high-level languages, why are there
no stack-based architectures around, like those for Pascal's P-code?
(In the 1970s, yes, but not today.)

The Z80 may be gone, but the 8080 can still be emulated by
bit slices. Did anyone ever use them?

Ben.










[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 11:46, Paul Koning wrote:
>
> Probably not.  Cycle accurate simulation is very hard.  It's only rarely been 
> done for any CPU, and if done it tends to be incredibly slow.  I remember 
> once using a MIPS cycle-accurate simulator (for the SB-1, the core inside the 
> SB-1250, later called BCM-12500).  It was needed because the L2 cache flush 
> code could not be debugged any other way, but it was very slow indeed.  
> Almost as bad as running the CPU logic model in a Verilog or VHDL simulator.  
> I don't remember the numbers but it probably was only a few thousand 
> instructions per second.

Then again, the Z80 isn't a very sophisticated chip.  No cache,
pipelining, speculative execution, etc.  A cheap 32-bit MCU running at
400 MHz might be able to pull it off pretty well.

But then again, there are FPGA cores for the Z80, etc.

I'd like to see a Z80 implemented with UV-201 vacuum tubes... :)

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 2:34 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 11:09, Bill Gunshannon via cctalk wrote:
> 
>> 
>> Following along this line of thought but also in regards all our
>> other small CPUs
>> 
>> Would it not be possible to use something like a Blue Pill to make
>> a small board (small enough to actually fit in the CPU socket) that
>> emulated these old CPUs?  Definitely enough horse power just wondered
>> if there was enough room for the microcode.
> 
> Blue pills are so yesterday!  There are far more small-footprint MCUs
> out there.   More RAM than any Z80 ever had as well as lots of flash for
> the code as well as pipelined 32-bit execution at eye-watering (relative
> to the Z80) speeds.
> 
> Could it emulate a Z80?  I don't see any insurmountable obstacles to
> that.  Could it be cycle- and timing- accurate?   That's a harder one to
> predict, but probably.

Probably not.  Cycle accurate simulation is very hard.  It's only rarely been 
done for any CPU, and if done it tends to be incredibly slow.  I remember once 
using a MIPS cycle-accurate simulator (for the SB-1, the core inside the 
SB-1250, later called BCM-12500).  It was needed because the L2 cache flush 
code could not be debugged any other way, but it was very slow indeed.  Almost 
as bad as running the CPU logic model in a Verilog or VHDL simulator.  I don't 
remember the numbers but it probably was only a few thousand instructions per 
second.

Then again, for the notion of a drop-in replacement for the original chip, you 
don't need a cycle accurate simulator, just one with compatible pin signalling. 
 That's not nearly so hard -- though still harder than a SIMH style ISA 
simulation.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Lamar Owen via cctalk

On 4/22/24 14:09, Bill Gunshannon via cctalk wrote:

Would it not be possible to use something like a Blue Pill to make
a small board (small enough to actually fit in the CPU socket) that
emulated these old CPUs?  Definitely enough horse power just wondered
if there was enough room for the microcode.

Microcore Labs has done this using a Teensy plus a small adapter board; 
see 
https://microcorelabs.wordpress.com/2022/05/11/mclz8-zilog-z80-emulator-in-trs-80-model-iii/ 
(Github repo: https://github.com/MicroCoreLabs/Projects/tree/master/MCLZ8 ).


There are others there as well.




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 11:09, Bill Gunshannon via cctalk wrote:

> 
> Following along this line of thought but also in regards all our
> other small CPUs
> 
> Would it not be possible to use something like a Blue Pill to make
> a small board (small enough to actually fit in the CPU socket) that
> emulated these old CPUs?  Definitely enough horse power just wondered
> if there was enough room for the microcode.

Blue pills are so yesterday!  There are far more small-footprint MCUs
out there.   More RAM than any Z80 ever had as well as lots of flash for
the code as well as pipelined 32-bit execution at eye-watering (relative
to the Z80) speeds.

Could it emulate a Z80?  I don't see any insurmountable obstacles to
that.  Could it be cycle- and timing- accurate?   That's a harder one to
predict, but probably.

But I'd wonder what the point was.  There are still lots of Z80s out
there in captivity.

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 2:09 PM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> Following along this line of thought but also in regards all our
> other small CPUs
> 
> Would it not be possible to use something like a Blue Pill to make
> a small board (small enough to actually fit in the CPU socket) that
> emulated these old CPUs?  Definitely enough horse power just wondered
> if there was enough room for the microcode.

Microcode?

> It would bring an even more interesting concept to the table.  The
> ability to add modifications to some of these chips to see just where
> they might have gone.  While I don't mind the VAX, I always wondered
> what the PDP-11 could have been if it had been developed instead.  :-)
> 
> bill

Of course the VAX started out as a modified PDP-11; the name makes that clear.  
And I saw an early document of what became the VAX 11/780, labeled PDP-11/85.  
Perhaps that was obfuscation.

Anyway, I would think such a small microprocessor could emulate a PDP-11 just 
fine, and probably fast enough.  The issue isn't so much the instruction set 
emulation but rather the electrical interface.  That's what would be needed to 
be a drop-in replacement.  Ignoring the voltage levels, there's the matter of 
implementing whatever the bus protocols are.  

Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
with the PIO engines providing help with the low level signaling.  Sounds like 
a fun exercise for the student. 
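
A rough sketch of the shape such a replacement might take (illustrative only; 
host_bus_read() and host_bus_write() are hypothetical stand-ins for the 
PIO/GPIO signalling layer, which is the genuinely hard, board-specific part):

#include <stdint.h>

/* The instruction-set emulation is the easy half; every memory access must go
 * out over the host machine's bus using its native protocol.  The two extern
 * helpers below are hypothetical placeholders for that signalling layer. */
extern uint16_t host_bus_read(uint16_t addr);                /* hypothetical */
extern void     host_bus_write(uint16_t addr, uint16_t w);   /* hypothetical */

typedef struct {
    uint16_t r[8];     /* R0-R7; R7 is the PC */
    uint16_t psw;
} pdp11_t;

void run(pdp11_t *cpu)
{
    for (;;) {
        uint16_t insn = host_bus_read(cpu->r[7]);   /* fetch over the real bus */
        cpu->r[7] += 2;
        /* decode and execute here; operand reads and writes also go through
         * host_bus_read()/host_bus_write(), so from the socket's point of
         * view the MCU behaves as an ordinary bus master. */
        (void)insn;
    }
}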

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Bill Gunshannon via cctalk




Following along this line of thought but also in regards all our
other small CPUs

Would it not be possible to use something like a Blue Pill to make
a small board (small enough to actually fit in the CPU socket) that
emulated these old CPUs?  Definitely enough horse power just wondered
if there was enough room for the microcode.

It would bring an even more interesting concept to the table.  The
ability to add modifications to some of these chips to see just where
they might have gone.  While I don't mind the VAX, I always wondered
what the PDP-11 could have been if it had been developed instead.  :-)

bill


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
A bit of a postscript:  The ALU on the 8085, according to Ken, is 8 bits wide.

https://www.righto.com/2013/01/inside-alu-of-8085-microprocessor.html

--Chuck


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 09:54, Lamar Owen via cctalk wrote:
> On 4/22/24 12:18, Chuck Guzis via cctalk wrote:
>> I don't know if this applies to the Z80, but on the 8080, 16-bit
>> increment/decrement is handled by a separate increment block (also used
>> to advance the P-counter and stack operations).  Probably one of the
>> reasons that INX/DCX doesn't set any flags.
> 16-bit INC and DEC are indeed handled by a separate block, which also
> gets used to increment PC and decrement SP at the appropriate times. 
> Ken's page on the 4-bit ALU has a 'mapped' dieshot showing it. Ken
> covers its operation in the blog article
> https://www.righto.com/2013/11/the-z-80s-16-bit-incrementdecrement.html

Interesting document. Ken also documents the K and V flags in the 8085.
I was aware of the use of the K flag with the 16-bit INX/DCX operations
as an over/underflow indicator for 16-bit increments/decrements.  He also
points out that it comes into play with the usual 8-bit ALU operations.

https://www.righto.com/2013/02/looking-at-silicon-to-understanding.html

One question that I've long had is whether the K flag is set in case of a
"wraparound" situation with the PC or stack pointer.  I have never
checked that (and 8085 programming is in my dimming past), but I suspect
that it does indeed get set, since the "wrap" of those registers would
be pretty uncommon in normal-functioning code.

Managing underflow with a 16-bit decrement is certainly useful and can
cut out a few instructions to test that situation.  However, the K flag
being restricted to the 8085 pretty much limits its use when viewed in
the universe of other x80 CPUs.

Somewhat akin to the packed BCD string instructions on the NEC V-series
CPUs.  Useful?  You bet--but not implemented on any of the strict Intel
hardware so left to molder away in a dusty corner.

--Chuck


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Lamar Owen via cctalk

On 4/22/24 12:18, Chuck Guzis via cctalk wrote:

I don't know if this applies to the Z80, but on the 8080, 16-bit
increment/decrement is handled by a separate increment block (also used
to advance the P-counter and stack operations).  Probably one of the
reasons that INX/DCX doesn't set any flags.
16-bit INC and DEC are indeed handled by a separate block, which also 
gets used to increment PC and decrement SP at the appropriate times.  
Ken's page on the 4-bit ALU has a 'mapped' dieshot showing it. Ken 
covers its operation in the blog article 
https://www.righto.com/2013/11/the-z-80s-16-bit-incrementdecrement.html


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Glen Slick via cctalk
On Mon, Apr 22, 2024 at 9:18 AM Chuck Guzis via cctalk
 wrote:
>
> On 4/22/24 08:36, Lamar Owen via cctalk wrote:
>
> > Die real estate forced the design to do without a full 8-bit ALU. When
> > you have a 4-bit ALU, and you are doing 16-bit math, you will need 4
> > cycles through the ALU.
>
> I don't know if this applies to the Z80, but on the 8080, 16-bit
> increment/decrement is handled by a separate increment block (also used
> to advance the P-counter and stack operations).  Probably one of the
> reasons that INX/DCX doesn't set any flags.

All sorts of interesting details are covered in several of Ken
Shirriff's blog posts.

Here are a few:

The Z-80 has a 4-bit ALU. Here's how it works.
https://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-how-it.html

Reverse-engineering the Z-80: the silicon for two interesting gates explained
https://www.righto.com/2013/09/understanding-z-80-processor-one-gate.html

The Z-80's 16-bit increment/decrement circuit reverse engineered
https://www.righto.com/2013/11/the-z-80s-16-bit-incrementdecrement.html

Why the Z-80's data pins are scrambled
https://www.righto.com/2014/09/why-z-80s-data-pins-are-scrambled.html


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Chuck Guzis via cctalk
On 4/22/24 08:36, Lamar Owen via cctalk wrote:

> Die real estate forced the design to do without a full 8-bit ALU. When
> you have a 4-bit ALU, and you are doing 16-bit math, you will need 4
> cycles through the ALU.

I don't know if this applies to the Z80, but on the 8080, 16-bit
increment/decrement is handled by a separate increment block (also used
to advance the P-counter and stack operations).  Probably one of the
reasons that INX/DCX doesn't set any flags.
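
A toy model of such a dedicated increment block, just to illustrate the point 
(my sketch, not the actual 8080 circuit): a ripple of half-adders separate from 
the ALU, with no flag outputs anywhere, which fits INX/DCX leaving the flags 
alone.

#include <stdint.h>

/* Toy model of a dedicated 16-bit incrementer: a ripple of half-adders with a
 * carry injected at bit 0.  Note there are no flag outputs, which matches
 * INX/DCX not touching the flags. */
uint16_t increment16(uint16_t v)
{
    uint16_t result = 0;
    unsigned carry = 1;                            /* +1 injected at bit 0 */
    for (int bit = 0; bit < 16; bit++) {
        unsigned a = (v >> bit) & 1u;
        result |= (uint16_t)((a ^ carry) << bit);  /* half-adder sum   */
        carry = a & carry;                         /* half-adder carry */
    }
    return result;
}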

--Chuck




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread William Donzelli via cctalk
> The Z80 is dead; long live the Z80.

They said that about the UX-201A...and every year hundreds or
thousands of new ones show up.

--
Will


[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Lamar Owen via cctalk

On 4/21/24 20:06, Peter Coghlan via cctalk wrote:

Why is that?  Did the Z80 take more cycles to implement its more complex
instructions?  Is this an early example of RISC vs CISC?

The Z80 is blessed with a 4-bit ALU, verified by reverse-engineering die shots (
https://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-how-it.html ).

Die real estate forced the design to do without a full 8-bit ALU. When 
you have a 4-bit ALU, and you are doing 16-bit math, you will need 4 
cycles through the ALU.
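
As a rough illustration of why (this is just the arithmetic, not the actual 
Z80 sequencing): a 16-bit add done four bits at a time needs four passes 
through the adder, with the carry handed from one pass to the next.

#include <stdint.h>
#include <stdio.h>

/* Adds two 16-bit values one 4-bit nibble at a time, the way a 4-bit ALU
 * must, propagating the carry between passes.  Four passes for 16 bits. */
uint16_t add16_nibble_serial(uint16_t a, uint16_t b)
{
    uint16_t result = 0;
    unsigned carry = 0;
    for (int pass = 0; pass < 4; pass++) {           /* one ALU pass per nibble */
        unsigned na  = (a >> (4 * pass)) & 0xFu;
        unsigned nb  = (b >> (4 * pass)) & 0xFu;
        unsigned sum = na + nb + carry;              /* 4-bit add with carry in */
        carry = sum >> 4;                            /* carry out to next pass  */
        result |= (uint16_t)((sum & 0xFu) << (4 * pass));
    }
    return result;
}

int main(void)
{
    printf("%04x\n", add16_nibble_serial(0x1234, 0x0FCD));   /* prints 2201 */
    return 0;
}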


Neither the 6502 nor the Z80 is RISC.  The 6502 simply runs very efficiently 
thanks to the design decisions made: an 8-bit ALU, 8-bit registers for 
everything, including the stack.  Math is fast.  Biphase clocking allows what 
could be considered a precursor of double-data-rate designs.  The Visual6502 
project shows the 'tick-tock' of it very well.


If you want to see what can be done with sufficient real estate on the 
chip and using more modern design methodologies, pick up a copy of Monte 
Dalrymple's book "Microprocessor Design Using Verilog HDL" ( 
https://www.elektor.com/products/microprocessor-design-using-verilog-hdl-e-book 
) in either ebook or paperback form (I bought both); Monte was 
responsible for the Z380 among other designs, and they are very efficient.  
Today's eZ80 builds on that, and is as efficient as the 6502 and much, 
much faster.


There are many softcores out there, so the Z80 lives on in those as 
well as in the Z180 (several parts are EoL, either last-time-buy or 
obsolete, but some parts are still Active) and the eZ80 (most 
instructions on the eZ80 are single-cycle, and the chip is pipelined; the 
eZ80 has a 24-bit ALU; plus, the eZ80 is a very capable microcontroller).  
The Z84015, in both 6 and 10 MHz variants, still shows as Active at Digikey.  
ALL are of course surface mount.  Through-hole anything is a dying breed.


The Z80 is dead; long live the Z80.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 21, 2024, at 9:17 PM, Will Cooke via cctalk  
> wrote:
> 
> 
> 
>> On 04/21/2024 7:06 PM CDT Peter Coghlan via cctalk  
>> wrote:
>> 
>> 
>> 
>> Why is that? Did the Z80 take more cycles to implement its more complex
>> instructions? Is this an early example of RISC vs CISC?
>> 
>> Regards,
>> Peter Coghlan
> 
> I'm certainly no authority, but I have programmed both processors in assembly 
> and studied them somewhat.  It took many years for me to believe that the 
> 6502 was "faster" than the Z80, but now I'm (mostly) a believer.  So here is 
> my take.
> 
> First, yes, the Z80 takes roughly 4 times as many clock cycles per 
> instruction.  Where the 6502 can complete a simple instruction in a single 
> clock, the Z80 takes a minimum of four.
> 
> The 02 certainly has a simpler architecture, but calling it a RISC machine 
> would probably make the RISC believers cringe.  It is simple, but it doesn't 
> follow the pattern of lots of registers (well, maybe) and a load/store 
> architecture.  But that may be its strongest point.  The zero page 
> instructions effectively make the first 256 bytes of RAM into a large (128 or 
> 256) register file.
> ...

Cycles per instruction is one aspect of RISC vs. CISC but there are more, and 
cycles per instruction may not be the most significant one.

Given enough silicon and enough brainpower thrown at the problem, CISC machines 
can be made to run very fast.  Consider modern x86 machines for example.  But 
the key point is "given enough silicon...".  

I think the significance of RISC isn't so much in cycles per instruction but 
rather in simplicity of implementation (for a given level of performance).  
It's not just single cycle instructions.  In RISC architectures it is often 
easier to achieve pipelining and parallelism.  Consider what's arguably the 
first example, the CDC 6600 with its parallelism, and its sibling the 7600 
which made the rather obvious addition of pipelining.

Simplicity of implementation means either lower cost for a given level of 
performance, or higher achievable performance for a given level of technology, 
or lower power per unit of performance, or easier power management 
optimization, or any combination of the above.  Consider ARM machines vs. x86.  
It's not so much that ARM machines go faster but that they do so on a fraction 
of the power, and that they require only a small amount of silicon to do so.

One other factor is that RISC machines rely on simple operations carefully 
arranged by optimizing compilers (or, in some cases, skillful programmers).  A 
multi-step operation can be encoded in a sequence of RISC operations run 
through an optimizing scheduler more effectively than the equivalent sequence 
of steps inside the micro-engine of a CISC processor.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-21 Thread Will Cooke via cctalk



> On 04/21/2024 7:06 PM CDT Peter Coghlan via cctalk  
> wrote:
>
>
>
> Why is that? Did the Z80 take more cycles to implement its more complex
> instructions? Is this an early example of RISC vs CISC?
>
> Regards,
> Peter Coghlan

I'm certainly no authority, but I have programmed both processors in assembly 
and studied them somewhat.  It took many years for me to believe that the 6502 
was "faster" than the Z80, but now I'm (mostly) a believer.  So here is my take.

First, yes, the Z80 takes roughly 4 times as many clock cycles per instruction. 
 Where the 6502 can complete a simple instruction in a single clock, the Z80 
takes a minimum of four.

The 02 certainly has a simpler architecture, but calling it a RISC machine 
would probably make the RISC believers cringe.  It is simple, but it doesn't 
follow the pattern of lots of registers (well, maybe) and a load/store 
architecture.  But that may be its strongest point.  The zero page instructions 
effectively make the first 256 bytes of RAM into a large (128 or 256) register 
file.
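
A quick illustration of why zero page is cheap, written emulator-style (the 
cycle counts quoted are the standard documented 6502 figures; the C itself is 
only a sketch, not anyone's actual emulator): a zero-page operand is one byte, 
so it needs one less operand fetch than a 16-bit absolute address.

#include <stdint.h>

/* Emulator-style sketch: zero-page addressing carries a one-byte operand
 * (an address in $00-$FF), absolute addressing a two-byte operand, so the
 * zero-page form saves an operand fetch.  Documented 6502 timings:
 * LDA zero-page = 3 cycles, LDA absolute = 4 cycles. */
uint8_t load_zero_page(const uint8_t *mem, uint16_t *pc)
{
    uint8_t addr = mem[(*pc)++];               /* one operand byte        */
    return mem[addr];                          /* 3 cycles on a real 6502 */
}

uint8_t load_absolute(const uint8_t *mem, uint16_t *pc)
{
    uint8_t lo = mem[(*pc)++];                 /* two operand bytes...    */
    uint8_t hi = mem[(*pc)++];
    return mem[(uint16_t)((hi << 8) | lo)];    /* ...so 4 cycles          */
}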

Along with all those pseudo-registers in page zero, the 02 has some really nice 
addressing modes.  In effect, all those pseudo-registers can be used as index 
registers in addition to directly holding operands.  The simple instructions 
operating on 8-bit registers run fast.  In the Z80, there are a 
fair number of registers, but most are limited in what they can be used for.  
You almost always have to go through the accumulator (register A).  So you end 
up moving stuff between memory and various registers, often shuffling stuff 
around once loaded, then storing it back to memory.  The Z80 has the IX and IY 
index registers, but they are even slower, adding another machine cycle (4 
clocks?) to an already slowish instruction just for the fetch, then another 
memory cycle.  If you have to load the operand and then store the result, that 
doubles the extra time needed.

So all of that leads to faster assembly language on the 02.  Good 6502 
programmers (I'm NOT one of them) know tons of tricks to get the most out of 
it, too.  People spent years learning the ins and outs of that particular 
processor.  I think the Z80 didn't get that kind of love, at least not as much. 
Most Z80 machines were running CP/M and most didn't have the graphics and 
sound that made the 02 machines so nice for home computers and games.  In 
addition, an awful lot of Z80 code/programmers were part time, moving to and 
from the 8080, which was really a different machine.

As a rough approximation I would say that a Z80 would require somewhere between 
4 to 8 times the clock for equivalent assembly language performance.  No doubt 
others will have other opinions.

However, the Z80 was probably better liked by the computer science people.  And 
it was a LOT easier to write a halfway decent compiler for.  It didn't need as 
many "tricks" to make it perform.  If you look at compiled code for the two, 
you will usually either find severe limitations on the 02 or very slow code, 
especially if you are looking at any "modern" language (Algol family, such as C 
or Pascal).  A BASIC (or perhaps even Fortran) compiler that doesn't have all 
the local variables and nested structure will usually fare better.

Anyway, that's my 1/2 cent worth.  Take it for what it's worth.

Will


Grownups never understand anything by themselves and it is tiresome for 
children to be always and forever explaining things to them,

Antoine de Saint-Exupery in The Little Prince