dataflow (was: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-11 Thread Jecel Assumpcao Jr. via cctalk
Chuck Guzis via cctalk wrote on Tue, 11 Apr 2017 18:05:01 -0700
> On 04/11/2017 04:53 PM, Jecel Assumpcao Jr. via cctalk wrote:
> 
> > I consider the heart of any modern high performance CPU to be a
> > dataflow architecture (described as an "out of order execution
> > engine") with hardware to translate the macrocode (CISC or RISC) to
> > the dataflow graph and tokens on the fly.
> I wouldn't characterize an out-of-order execution scheduler as
> "dataflow", at least not in the traditional sense.

I have never seen anybody else describe it that way either, including
people whose research in the late 1980s was in dataflow architectures.
But I look at an engine with 24 "in flight" instructions plus all the
register renaming circuits and it sure looks the same to me.

> Certainly, nobody that I was aware of ever categorized, say, a CDC 6600
> as a dataflow machine.

I was not aware that there had been any out-of-order implementations
after the IBM ACS until the second half of the 1990s. Given Cray's
passion for simplicity, I would not expect any of his designs to use
o-o-o (especially one as early as the CDC 6600).

> At least not in the same sense that I'd categorize a NEC uPD7281 as a
> dataflow device.

That is the one I am most familiar with, along with the Manchester
Dataflow Machine and the MIT Tagged Token machine. An interesting modern
dataflow architecture is the TRIPS:

https://en.wikipedia.org/wiki/TRIPS_architecture

-- Jecel


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Chuck Guzis via cctalk
On 04/11/2017 04:53 PM, Jecel Assumpcao Jr. via cctalk wrote:

> I consider the heart of any modern high performance CPU to be a
> dataflow architecture (described as an "out of order execution
> engine") with hardware to translate the macrocode (CISC or RISC) to
> the dataflow graph and tokens on the fly.
I wouldn't characterize an out-of-order execution scheduler as
"dataflow", at least not in the traditional sense.

Certainly, nobody that I was aware of ever categorized, say, a CDC 6600
as a dataflow machine.

At least not in the same sense that I'd categorize a NEC uPD7281 as a
dataflow device.

--Chuck









Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Chuck Guzis via cctalk
On 04/11/2017 04:47 PM, Eric Smith wrote:

> Apparently there was little concern for either Fortran or COBOL, the 
> most widely used programming languages at the time.


So FORTRAN/Fortran and COBOL are still with us and the 432 is dust.
There's a lesson there somewhere...

--Chuck


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Jecel Assumpcao Jr. via cctalk
Noel Chiappa via cctalk wrote on Tue, 11 Apr 2017 10:18:00 -0400 (EDT)
> > From: Sean Conner
> 
> > I really think it's for *this* reason (the handler() example) that C
> > doesn't allow nested functions.
> 
> I wouldn't be sure of that; I would tend to think that nested functions were
> left out simply because they add complexity, and didn't add enough value to
> outweigh that complexity. (In ~40 years of programming in C, I have never
> missed them.)

When block-based languages evolved into modular languages (Ada,
Modula-2), they added a system with two levels: public and private
declarations. C got the same job done with its header files and separate
compilation, and eventually was able to enforce that with "static"
function declarations.

If you have these two levels you will rarely (if ever) need extra ones.
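
A minimal sketch of the C version of those two levels (the file name and
helper are invented for illustration): the header is the public part,
and "static" keeps the private part out of the linker's sight:

    /* counter.h -- the public half: what other modules may call */
    int counter_next(void);

    /* counter.c -- the private half, hidden behind "static" */
    #include "counter.h"

    static int count = 0;                     /* invisible outside this file */
    static int bump(int x) { return x + 1; }  /* private helper */

    int counter_next(void) { count = bump(count); return count; }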

-- Jecel


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Jecel Assumpcao Jr. via cctalk
Chuck Guzis via cctalk wrote on Tue, 11 Apr 2017 09:37:27 -0700
> On 04/10/2017 02:23 PM, Eric Smith wrote:
> 
> > When the 432 project (originally 8800) started, there weren't many
> > people predicting that C (and its derivatives) would take over the world.
> 
> That's the danger of a too-aggressive CISC, isn't it?  I suppose that
> it's safe to say that if you look under the hood of any modern CPU,
> there's a RISC machine in there somewhere.

I consider the heart of any modern high performance CPU to be a dataflow
architecture (described as an "out of order execution engine") with
hardware to translate the macrocode (CISC or RISC) to the dataflow graph
and tokens on the fly.

-- Jecel


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Eric Smith via cctalk
On Apr 11, 2017 11:29 AM, "Chuck Guzis via cctalk" 
wrote:
> This has me wondering about how the 432 people implemented FORTRAN.

Oh, there's a very simple answer to that. They didn't!

Early in the 8800/432 development (which started in 1975), Intel was
developing their own language for it, generally in the Algol family. It's
possible that they intended to support other languages, but Fortran
definitely would have been a poor fit.

When Ada came along, they decided that it was a reasonably good fit, and
with the DoD pushing Ada, that would be an easier sell to customers than a
proprietary language. Intel marketing basically claimed that the 432 was
designed for Ada, though that wasn't really the case.

The only two programming languages Intel supported on the 432 were:

1) Ada, using a cross-compiler written in Pascal and hosted on a VAX, to
run on "real" 432 systems such as the 432/670

2) Object Programming Language (OPL), a Smalltalk dialect based on Rosetta
Smalltalk, which only ran on the 432/100 demo board, a Multibus board
inserted in a slot of an Intel MDS development system.

Late in the 432 timeline there was an unsupported port of XPL, but it did
not generate native code.

Apparently there was little concern for either Fortran or COBOL, the most
widely used programming languages at the time.


Re: The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-11 Thread sieler_allegro via cctalk
Eric writes:

The 432 architects went on to design a RISC processor that eliminated most
of the drawbacks of the 432, but still supported object-oriented
addressing, type safety, and memory safety, but using a 33-bit word with one
bit being the tag to differentiate Access Descriptors from data. This
became the BiiN machine, which was unsuccessful.

And we come full circle.  One of the BiiN designers, John VanZandt (may have 
been from Intel)
cut his teeth on the Burroughs B6700 at UCSD (tags, descriptors, stack),
and was one of the original implementors of UCSD Pascal.
At school, he roomed with a FORTH/LISP/APL implementor (me).

Small world, sometimes :)

Stan




Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Chuck Guzis via cctalk
On 04/11/2017 10:05 AM, Paul Koning wrote:

> 
> Back then it would have seemed a reasonable assumption that high
> level, strongly typed, languages would continue to flourish.  If you
> assume Algol or Pascal or Ada, a machine like the 432 (or like the
> Burroughs 5500 and its descendants) makes perfect sense.

This has me wondering about how the 432 people implemented FORTRAN.
Between parameter-passing-by-reference, EQUIVALENCE and COMMON, one can
be pretty cavalier about data types and addressing.   Yet most FORTRANs
of the time did not implement pointers.

--Chuck


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Paul Koning via cctalk

> On Apr 11, 2017, at 12:37 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 04/10/2017 02:23 PM, Eric Smith wrote:
> 
>> When the 432 project (originally 8800) started, there weren't many
>> people predicting that C (and its derivatives) would take over the world.
> 
> That's the danger of a too-aggressive CISC, isn't it?  I suppose that
> it's safe to say that if you look under the hood of any modern CPU,
> there's a RISC machine in there somewhere.

Back then it would have seemed a reasonable assumption that high level, 
strongly typed, languages would continue to flourish.  If you assume Algol or 
Pascal or Ada, a machine like the 432 (or like the Burroughs 5500 and its 
descendants) makes perfect sense.

I don't think this is exactly a question of RISC vs. CISC, but rather a
question of how you believe addressing is done.  For example, the EL-X8 is
a one-address machine with a regular instruction layout, which makes it
somewhat RISC-like in structure.  But it has addressing modes clearly
designed for efficient handling of block-structured recursive languages
like Algol.

paul



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Chuck Guzis via cctalk
On 04/10/2017 02:23 PM, Eric Smith wrote:

> When the 432 project (originally 8800) started, there weren't many
> people predicting that C (and its derivatives) would take over the world.

That's the danger of a too-aggressive CISC, isn't it?  I suppose that
it's safe to say that if you look under the hood of any modern CPU,
there's a RISC machine in there somewhere.

--Chuck



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Guy Sotomayor Jr via cctalk

> On Apr 10, 2017, at 11:18 PM, Lars Brinkhoff via cctalk 
>  wrote:
> 
> Chuck Guzis wrote:
>> That is a bit of a surprise--in my experience it takes very little
>> code to support Forth on any processor--that someone would build a
>> dedicated chip for it is unusual.
> 
> There are actually quite a few Forth processors.  Charles Moore himself
> designed half a dozen or so.  The RTX-2000 series is a descendant of his
> Novix chip.  Check out GreenArrays for his latest work.
> 
> There are also some FPGA designs out there.  The J1 seems somewhat
> popular.

Yes, I’m using the J1 in some of my projects.  I even wrote a J1 emulator
(in Forth of course) but it has some limitations.  I’m in the process
of re-writing it in C so I can do some multi-threaded stuff and better
simulate asynchronous I/O.

TTFN - Guy

Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Noel Chiappa via cctalk
> From: Sean Conner

> I really think it's for *this* reason (the handler() example) that C
> doesn't allow nested functions.

I wouldn't be sure of that; I would tend to think that nested functions were
left out simply because they add complexity, and didn't add enough value to
outweigh that complexity. (In ~40 years of programming in C, I have never
missed them.)

C seems (well, until the standards committees got ahold of it) to have added
things as a demonstrated need was felt for them (see DMR's evolution of C
paper), and maybe they just never found a need for nested function
definitions?

I suspect that Ken probably knows; he's not (AFAIK) on the Unix History list
(TUHS), but several of his early co-workers (including Stephen Johnson, who
did PCC) are, and could relay a question to him, if it were asked over there
(if we really want to know).

Noel


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Tapley, Mark via cctalk
Two of them went past Pluto in 2015, inside the LORRI and PEPSSI 
instruments on New Horizons, running (of course) flight software in FORTH. At 
least one more was aboard MESSENGER at Mercury, in the MASCS instrument.
That is a processor architecture with legs… :-)
See p. 17 of:

http://www.boulder.swri.edu/pkb/ssr/ssr-lorri.pdf

- Mark
210-522-6025 office 
210-379-4635cell

On Apr 10, 2017, at 6:54 PM, dwight via cctalk  wrote:

> Harris made the RTX-2000 in a rad hardened form so they were commonly used 
> for satellites.



Re: The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-11 Thread Eric Smith via cctalk
On Mon, Apr 10, 2017 at 3:39 PM, Sean Conner  wrote:

>   What about C made it difficult for the [Intel iAPX] 432 to run?
>

The iAPX 432 was a capability based architecture; the only kind of pointer
supported by the hardware was an Access Descriptor, which is a pointer to
an object (or a refinement, which is a subset of an object).  There is no
efficient way to do any kind of pointer arithmetic, even with refinements.

In the Release 1 and 2 architectures, objects were either Access Objects,
which could contain Access Descriptors (pointers to objects), or Data
Objects, which could NOT contain Access Descriptors. As a result,
architectural objects were often used in pairs, with the Access Object
having an Access Descriptor at a specific offset (generally 0) pointing to
the corresponding Data Object.

In the Release 3 architecture, a single object could have both an Access
Part and a Data Part, with basically the same restriction: the Access Part
can only store Access Descriptors, and the Data Part can NOT store Access
Descriptors.

As a consequence, a C pointer to a structure containing both pointer and
non-pointer data would have to be represented as a composite of:

   1) an Access Descriptor to the containing object
   2) an offset into the data object or data part, for the non-pointer
      data and the non-Access-Descriptor portion of any pointers
   3) an offset into the access object or access part, for the Access
      Descriptor portion of any pointers
The architecture provides no assistance for managing this sort of pointer;
the compiler would just have to emit all the necessary code.
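
A compiler might have lowered such a pointer to something like this
hypothetical three-part record (the type and field names are invented
for illustration):

    /* Sketch of a "fat" C pointer on the 432 */
    typedef unsigned long access_descriptor;  /* stand-in for a real AD */

    struct c432_ptr {
        access_descriptor ad;  /* AD selecting the containing object */
        unsigned data_off;     /* offset into the data part          */
        unsigned access_off;   /* offset into the access part        */
    };

Every dereference would then need code to pick the right part and the
right offset by hand.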

However, C requires that it be possible to cast other data types into
pointers. The 432 can easily enough let you read an access descriptor as
data, but it will not allow you to write data to an access descriptor. That
will raise an exception. It would take really awful hacks in the operating
system to subvert that, and would be insanely slow. (On a machine that was
already quite slow under normal conditions.)  You can't even cast an Access
Descriptor (which occupies 32 bits of memory) to uint32_t, then cast it
back unmodified, e.g., to store a pointer into an intptr_t then put it back
in a pointer.
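
In other words, even this innocuous-looking round trip (a sketch; it is
perfectly legal C99) would blow up on the 432:

    #include <stdint.h>

    void roundtrip(int *p) {
        intptr_t n = (intptr_t)p;  /* reading the AD's bits as data: the
                                      432 can allow this */
        int *q = (int *)n;         /* writing data bits into an AD slot:
                                      on the 432 this raises an exception */
        (void)q;
    }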

It would almost certainly be more efficient to implement C on the 432 by
simply allocating a single large array of bytes as the memory for the C
world, and implementing pointers only as offsets within that C world.  This
would preclude all access from C code to normal 432 objects, except by
calling native libraries through hand-written glue. It would effectively be
halfway to an abstract C machine; the compiler could emit a subset of
normal 432 machine instructions that operate on the data.
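
As a sketch (every name here is invented), the C world might look like:

    /* One flat byte array is the entire C memory; a C "pointer" is
       just an integer offset into it. */
    #include <stdint.h>

    static uint8_t c_world[1UL << 20];   /* 1MB of "C memory" */
    typedef uint32_t cptr;               /* a pointer is an offset */

    static uint8_t peek(cptr p)            { return c_world[p]; }
    static void    poke(cptr p, uint8_t v) { c_world[p] = v; }

Of course the 432 could not actually hold c_world in a single object,
which is the next problem.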

Note that the 432 segment size is limited to 64KB. Accessing an array
larger than that, such as the proposed C world, is expensive. You have to
have an array of access descriptors to data objects of 64KB (or some other
power of 2) each. Release 1 and 2 provide no architectural support for it,
so the machine code would have to take C virtual addresses and split them
into the object index and offset.  Release 3 provides an instruction for
indexing a large array in this fashion; IIRC the individual data objects
comprising the array are 2KB each.
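
The split itself is cheap to write down but costs instructions on every
access; a sketch, with 64KB data objects assumed:

    /* A C-world virtual address becomes an index into an array of
       access descriptors plus an offset into the selected object. */
    #include <stdint.h>

    #define OBJ_SHIFT 16        /* 64KB (power of 2) data objects */
    #define OBJ_MASK  0xFFFFu

    static uint32_t obj_index(uint32_t vaddr)  { return vaddr >> OBJ_SHIFT; }
    static uint32_t obj_offset(uint32_t vaddr) { return vaddr & OBJ_MASK; }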

  -spc (Curious here, as some aspects of the 432 made their way to the 286
> and we all know what happened to that architecture ... )
>

The only significant aspect of the 432 that made it into the 286 was the
use of 64KB segments, and that had already been done (badly) in the 8086.

The 432 architects went on to design a RISC processor that eliminated most
of the drawbacks of the 432, but still supported object-oriented
addressing, type safety, and memory safety, but using a 33-bit word with one
bit being the tag to differentiate Access Descriptors from data. This
became the BiiN machine, which was unsuccessful. With the tag bit and
object-oriented instructions removed, it became the i960; the tag bit and
object-oriented instructions were later offered as the i960MX. The military
used the i960MX, but it is unclear whether they actually made use of the
tagging.

Eric


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Lars Brinkhoff via cctalk
Chuck Guzis wrote:
> That is a bit of a surprise--in my experience it takes very little
> code to support Forth on any processor--that someone would build a
> dedicated chip for it is unusual.

There are actually quite a few Forth processors.  Charles Moore himself
designed half a dozen or so.  The RTX-2000 series is a descendant of his
Novix chip.  Check out GreenArrays for his latest work.

There are also some FPGA designs out there.  The J1 seems somewhat
popular.


Re: The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-10 Thread Sean Conner via cctalk
It was thus said that the Great Jecel Assumpcao Jr. via cctalk once stated:
> Sean Conner via cctalk wrote on Mon, 10 Apr 2017 17:39:57 -0400
> >   What about C made it difficult for the 432 to run?  
> > 
> >   -spc (Curious here, as some aspects of the 432 made their way to the 286
> > and we all know what happened to that architecture ... )
> 
> C expects memory addresses to look like integers and for it to be easy
> to convert between the two. If your architecture uses a pair of numbers
> or an even more complicated scheme then you won't be able to have a
> proper C but only one or more less than satisfactory approximations.

  Just because a ton of C code was written with that assumption doesn't make
it actually true.  A lot of C code assumes a byte-addressable, two's
complement architecture but C (technically Standard C) doesn't require
either and goes out of its way to warn programmers *not* to make such
assumptions.

  The C Standard is very careful to note what is and isn't allowed with
respect to memory, and much of what real code does is technically
undefined behavior; anything can happen.

> The iAPX432 and 286 used logical segments. So there is no sequence of
> increment or decrement operations that will get you from a byte in one
> segment to a byte in another segment. For the 8086 that is sometimes
> true but can be false if the "segments" (they should really be called
> relocation registers instead) overlap.

  Given:

p1 = malloc(10);
p2 = malloc(65536);

  There is no legal way to increment *or* decrement one to get to the
other.  It's not even guaranteed that p2 > p1.
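
  To make the rules concrete, a sketch:

    #include <stdlib.h>

    void compare(void) {
        char *p1 = malloc(10);
        char *p2 = malloc(65536);

        if (p1 == p2) { }  /* equality is always well-defined       */
        if (p1 <  p2) { }  /* undefined: relational comparison of
                              pointers into two different objects   */
        free(p1);
        free(p2);
    }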

> Another feature of C is that it doesn't take types too seriously when
> dealing with pointers. This means that a pointer to an integer array and
> a pointer to a function can be mixed up in some ways. 

  This is an issue, but mostly with K&R C (which had even less type
checking than ANSI C).  These days a compiler will warn if you try to
pass a function where a data pointer is expected, even with *no*
cranking of the warning levels.

  Yes, C has issues, but please try not to make up new ones for modern C.

  But if the point was, back in the day (1982) that this *was* an issue,
then yes, I would agree (to a point).  But I would bet that had the 432 been
successful, a C compiler would have been produced for it.

> If an application
> has been written like that then the best way to run it on
> architectures like these Intel ones is to set all segments to the same
> memory region and never change them during execution. This is sometimes
> called the "tiny memory model".
> 
> https://en.wikipedia.org/wiki/Intel_Memory_Model
> 
> Most applications keep function pointers separate from other kinds of
> pointers and in this case you can set the code segment to a different
> area than the data and stack for a total of 128KB of memory (compared to
> just 64KB for the tiny memory model).
> 
> The table in the page I indicated shows options that can use even more
> memory, but that requires non-standard C stuff like "far pointers" and I
> don't consider the result to be actually C, since you can't move
> programs to and from machines like the VAX or 68000 without rewriting
> them.

  "Far" pointers exist for MS-DOS to support mixed memory-model programming,
where library A wants object larger than 64K while library B doesn't care
either way.  Yes it's a mess but that's pragmatism for you.

  But there's still code out there with such remnants, like zlib.  For
example:

ZEXTERN int ZEXPORT inflateBackInit OF((z_stream FAR *strm, int windowBits,
unsigned char FAR *window));

ZEXTERN, ZEXPORT, OF and FAR exist to support different C compilers over
the ages.  And of those, ZEXTERN and ZEXPORT are for Windows, FAR is for
MS-DOS (see a pattern here?) and OF is for pre-ANSI C compilers.

  -spc



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread jim stephens via cctalk



On 4/10/2017 5:45 PM, Chuck Guzis via cctalk wrote:

I suspect that at some point, Intel had
its big-system hopes pinned on the iA432 chipset.

https://en.wikipedia.org/wiki/Intel_iAPX_432

A friend made his career at BiiN, mostly coding mind-numbing code from
specs, and a round of golf every morning.

The Siemens employees and Intel employees in the building were always in
a state of terror over the politics, but the BiiN employees had Intel
retirement and tenure accumulating, and were free of any churn.

He retired early from Intel.

The wiki article mentions Merced, which I worked on doing an ICE product
(third party company).  Loved the stepping of the first processor,
8765309.  The slot @ a fab appeared so suddenly no one in the department
that prepped the package to make the first run had time to change it
out.  It appeared @ a demo at an inopportune moment, to much amusement.
But I digress.

thanks
Jim




Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Chuck Guzis via cctalk
On 04/10/2017 04:47 PM, Jecel Assumpcao Jr. via cctalk wrote:

> The 64KB segments in the 8086 were not a problem for Pascal (or
> Smalltalk, as shown by the Xerox PARC Notetaker computer) because each
> heap object and each proceedure can live in a different segment to take
> advantage of the whole memory. A single object (like an array) can't be
> split into multiple segments which limits us to small objects (this is
> not a problem for Smalltalk which can completely hide such splits as
> shown in the Mushroom computer).

Oh, heck, I pointed out the problems with segment-offset addressing to
Intel applications engineers before the 8086 was released on silicon.
They didn't seem to think it was a big deal.  I even asked for an
"add to far" instruction that would get rid of the nasty bit
manipulation, but to no avail.

But for tiny, small and compact memory models, the 8086 is just fine.

Speculatively, I don't think that the 8086 was initially considered as a
minicomputer-type processor, but more of an extension of the x80
architecture to serve the embedded world.   No privileged mode, strange
segment/offset addressing, etc.  I suspect that at some point, Intel had
its big-system hopes pinned on the iA432 chipset.

The fiction of automatic translation from 8080/85 code was just that.  I
recall that "Fast Eddie", our local sales rep made the mistake of
telling a tall one that Intel's assembly-language translator produced
smaller code that ran faster than the x80 version.  I called him on it
and gave him a sample program that ran under ISIS-II and told him to do
his best.  It was a floating-point package, with test data.   Straight
code; no fancy macros.   I probably still have the original code somewhere.

So, we went down to the local sales office in Santa Clara where there
was an MDS all set up, complete with hard drive and the 80-to-86
translator.  I cautioned the sales engineer that the code did a lot of
tricky stuff with flags, so any translation would have to be very accurate.

The program went in (it was about 10 AM) and the MDS just ground and
ground...  12:30 came around and we were treated to lunch while the
translator worked on the miserable 3000 lines of 8080 assembly.  Back
from lunch, nothing...   By 5 PM, we had gotten nowhere, so we said our
goodbyes and told him to contact us when they finally got it to work.

Two weeks passed and Fast Eddie called to say that they had finally got
the translation done (after some tweaking of the translator).  The
result was that the translated code was nearly twice as large as the x80
code and, while the program executed, it gave the wrong answers (we
didn't tell them what the answers should be as part of the test).

I was advocating the very new Motorola 68K chip and had even produced a
(wirewrap) prototype CPU board that ran enough code to get to the "Hello
world" stage.  Unfortunately, Bill Davidow was on our BOD and he said in
no uncertain terms that it would be a cold day in Hell if one of the
companies he shepherded used a competitor's CPU.   So we eventually
wound up using the 80186, and added a spot for the 80286 on the CPU board.

We got it all working, but it was a long slog because we were using
prerelease silicon and a bug could stop us dead for two weeks or more
while we awaited the next stepping.

--Chuck



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread dwight via cctalk
I'd also like to have the board. I have a board with a NC4000 but never managed 
to get a RTX-2000.

One might say the NC4000 was the prototype for the RTX-2000.

The RTX-2000 could run applications several times faster than the same
applications on an x86 machine of the time. They were often used as
accelerators for different purposes. They ran two 16-bit buses at the
same time and could execute as many as 4 operations in a single cycle.

Most instructions did a minimum of 2 operations in a single cycle.

Harris made the RTX-2000 in a rad-hardened form, so they were commonly
used for satellites.

Having only a few gates for a processor meant they were more reliable
for such applications as well.

As an example, I wrote code for my NC4000, running at 2 MHz, that could
sort 1,000 integers in 19.1 ms, worst case.

I'm in Silicon Valley.

Dwight



From: cctalk <cctalk-boun...@classiccmp.org> on behalf of Dave via cctalk 
<cctalk@classiccmp.org>
Sent: Monday, April 10, 2017 11:41:34 AM
To: General Discussion: On-Topic and Off-Topic Posts
Subject: RTX-2000 processor PC/AT add-in card (any takers?)

I have a Harris RTX-2000 based system control board for a long defunct system.  
The board worked when removed more than 20 years ago in the mid 90's.  The 
RTX-2000 is a stack-based processor designed for running FORTH.  I think it was 
designed by Phil Koopman based on his graduate work.  The board is a 16-bit ISA 
board.  It was part of an MRI system that ran a version of MPE forth with a 
C-to-FORTH compiler (actually a C-like variant) that spits out a 16-bit FORTH 
variant with some embedded RTX-2000 code.

I also have another card with 3 channels of streaming 16-bit digital I/O, with 
special hardware to implement on-the-fly rotation matrices to the streaming 
output.
I have all the software and drivers as well, and I have written a C-based
simulator that can run the FORTH/assembly emitted by the C-to-FORTH compiler 
(as well as the MRI libraries and hardware.)
If anyone wants to tinker with this hardware, or just pull the RTX-2000 chip, I 
would rather find a good home than toss the boards.
Dave


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Cameron Kaiser via cctalk
> > Thanks for the list--I was aware of the various Java engines and the WD
> > P-code engine, but had never run into the SCAMP.
> 
> I just found an academic Pascal microprocessor from 1980 called EM-1 and
> described all the way to the chip layout level:
> 
> http://authors.library.caltech.edu/27046/1/TR_2883.pdf

My Venix DEC PRO 380 will run EM-1 binaries in an emulator (it looks like).

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- May I join your mind? -- Sarek, Star Trek III --


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Jecel Assumpcao Jr. via cctalk
Chuck Guzis via cctalk wrote on Date: Mon, 10 Apr 2017 15:21:08 -0700
> Thanks for the list--I was aware of the various Java engines and the WD
> P-code engine, but had never run into the SCAMP.

I just found an academic Pascal microprocessor from 1980 called EM-1 and
described all the way to the chip layout level:

http://authors.library.caltech.edu/27046/1/TR_2883.pdf

> Okay, I've got to ask--exactly what made the 8086 unsuitable for C, but
> work with Pascal?  I'll admit to puzzlement about this statement.

I talked about the problem with far pointers and C in the post about the
iAPX 432.

The 64KB segments in the 8086 were not a problem for Pascal (or
Smalltalk, as shown by the Xerox PARC Notetaker computer) because each
heap object and each procedure can live in a different segment to take
advantage of the whole memory. A single object (like an array) can't be
split into multiple segments which limits us to small objects (this is
not a problem for Smalltalk which can completely hide such splits as
shown in the Mushroom computer).

One big difference between Pascal and C is that while C seems to have
nested lexical scopes like Algol at first glance (and indeed it is often
listed as being part of the "Algol family"), it really doesn't. An
object either lives in the heap or in the local stack frame. You can
declare new variables inside { ... } and they will shadow variables with
the same name declared outside of these brackets, but this has no effect
on the runtime structures.
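
A sketch: both "x"s below live in the same stack frame (or in
registers); the inner declaration only shadows the name at compile time:

    void f(void) {
        int x = 1;
        {
            int x = 2;  /* shadows the outer x; just another slot in
                           the same frame, no new runtime structure */
            (void)x;
        }
        (void)x;        /* the outer x again, still 1 */
    }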

Pascal, on the other hand, allows procedures to be declared inside
other procedures, and these nested scopes can access stuff declared in
the more external scopes. This requires some runtime structures that can
be awkward to implement in many processor architectures. An expression
like:

 x := y;

might generate quite a bit of code if "x" was declared one level up and
"y" was declared three levels up. But on the 8086 we could have pointers
to the frames of the various lexical levels saved at the start of the
current frame just like the "display registers" in the Burroughs B5000.
We could have something like:

 mov di,[bp-2*3] ; lexical level 3
 mov ax,[di-20] ; y
 mov di,[bp-2*1] ; lexical level 1
 mov [di-8],ax ; x

Filling up those pointers on each procedure entry can take some time, so
a popular alternative for when nested references were not too common was
to have a linked list of runtime frames and code like:

 mov di,[bp-2] ; lexical level 1
 mov di,[di-2] ; lexical level 2
 mov di,[di-2] ; lexical level 3
 mov ax,[di-20] ; y
 mov di,[bp-2] ; lexical level 1
 mov [di-8],ax ; x

Being able to directly address constant offsets from the base pointer
and offsets from the result of that greatly reduces the number of
instructions needed to support lexical scoping in Pascal. For C,
constant offsets from a pointer are great for getting at the elements of
structs, so this is a nice thing to have in any case and most RISCs
implement this.
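
A sketch of that struct case: the field access compiles to a single
constant-offset load (the 8086 instruction in the comment is
illustrative, not actual compiler output):

    struct point { int x, y; };

    int get_y(struct point *p) {
        return p->y;  /* one load at a constant offset from p,
                         e.g. mov ax,[di+2] on the 8086 */
    }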

-- Jecel
p.s.: I haven't programmed in x86 assembly since 1987 so don't trust the
above code fragments


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Nigel Williams via cctalk
On Tue, Apr 11, 2017 at 7:49 AM, Jecel Assumpcao Jr. via cctalk
 wrote:
> About the original question, since the Burroughs architecture was
> eventually implemented as a microprocessor you can say that this was
> designed to run Algol:
>
> http://www.cpushack.com/2015/04/18/the-forgotten-ones-unisys-scamp-d-mainframe/

It is worth remembering the SCAMP MCM also supported the IBM 370 for
the Unisys System-80/7E platform (running OS/3).

However the SCAMP was implemented, it could handle two quite different
machine architectures.


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Chuck Guzis via cctalk
On 04/10/2017 02:49 PM, Jecel Assumpcao Jr. via cctalk wrote:


Thanks for the list--I was aware of the various Java engines and the WD
P-code engine, but had never run into the SCAMP.

> Some architectures that are considered general purpose have included
> features to support specific languages. The original 8086 was good at
> running Pascal, but pretty bad at C, for example (this was fixed in the
> 386). The National 32016 tried to support Modula-2 (and Ada) which
> forced the 68020 to add matching features, which then were dropped from
> the 68030 as it became obvious that C had won the language wars of the
> 1980s.

Okay, I've got to ask--exactly what made the 8086 unsuitable for C, but
work with Pascal?  I'll admit to puzzlement about this statement.

--Chuck



The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-10 Thread Jecel Assumpcao Jr. via cctalk
Sean Conner via cctalk wrote on Mon, 10 Apr 2017 17:39:57 -0400
>   What about C made it difficult for the 432 to run?  
> 
>   -spc (Curious here, as some aspects of the 432 made their way to the 286
>   and we all know what happened to that architecture ... )

C expects memory addresses to look like integers and for it to be easy
to convert between the two. If your architecture uses a pair of numbers
or an even more complicated scheme then you won't be able to have a
proper C but only one or more less than satisfactory approximations.
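
A sketch of the expectation that flat-memory C code bakes in (none of
this has a sensible meaning on a pair-of-numbers architecture):

    #include <stdint.h>

    int flat_assumption(void) {
        int buf[4] = {0, 0, 7, 0};
        uintptr_t a = (uintptr_t)buf;          /* address as an integer */
        int *p = (int *)(a + 2 * sizeof(int)); /* integer math = &buf[2] */
        return *p;                             /* 7 on a flat machine */
    }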

The iAPX432 and 286 used logical segments. So there is no sequence of
increment or decrement operations that will get you from a byte in one
segment to a byte in another segment. For the 8086 that is sometimes
true but can be false if the "segments" (they should really be called
relocation registers instead) overlap.

Another feature of C is that it doesn't take types too seriously when
dealing with pointers. This means that a pointer to an integer array and
a pointer to a function can be mixed up in some ways. If an application
has been written like that then the best way to run it on
architectures like these Intel ones is to set all segments to the same
memory region and never change them during execution. This is sometimes
called the "tiny memory model".

https://en.wikipedia.org/wiki/Intel_Memory_Model

Most applications keep function pointers separate from other kinds of
pointers and in this case you can set the code segment to a different
area than the data and stack for a total of 128KB of memory (compared to
just 64KB for the tiny memory model).

The table in the page I indicated shows options that can use even more
memory, but that requires non-standard C stuff like "far pointers" and I
don't consider the result to be actually C, since you can't move
programs to and from machines like the VAX or 68000 without rewriting
them.

-- Jecel

p.s.: sorry about the word wrapped link in my other post. It should have
been:
> http://www.cpushack.com/2015/04/18/the-forgotten-ones-unisys-scamp-d-mainframe/


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Jecel Assumpcao Jr. via cctalk
Bill Gunshannon via cctalk wrote on Mon, 10 Apr 2017 20:59:40 +
> On 4/10/2017 4:42 PM, Chuck Guzis via cctalk wrote:
> > Were there any microprocessor chips that attempted to mimic the
> > Burroughs B5000 series and natively execute Algol of any flavor?
> 
> No, but Western Digital implemented the UCSD P-machine in hardware
> selling it as the Pascal Microengine.  I always wanted one of those
> but I fear few have survived the scrap yard.

This shared most chips with the DEC LSI-11 but used different microcode.

I created this list for one of my talks:

Historical Language Specific Architectures:

- Algol: English Electric KDF9, Burroughs B5000

- APL: Philip Abrams' machine

- Pascal: Western Digital Microengine

- Modula-2: Lilith

- extended Ada: Intel iAPX432

- Lisp: Symbolics, Lisp Machine Inc., Texas Instruments, Xerox D
Machines

- Forth: Novix, Harris RTX-2000, MISC MC17, WISC CPU/16, SC32, MuP21,
MSL16, Ignite, i21, F21, E16, MARC4, QSP16, TF2216, Steamer16,
MicroCore, J1, SC20, F18 GA144

- Java: picoJava, aj102, Cjip, Komodo, FemtoJava, ARM Jazelle, JOP,
SHAP, MAJIC

- Smalltalk: Xerox D Machines, Katana32, Swamp, AI32, SOAR, COM,
Rekursive, Mushroom, J-Machine

I expect this list is not complete. Note that I don't include computers
created for a specific language using a conventional processor, like the
APL computers MCM/70, IBM 5100 and Ampere WS-1.

Some architectures that are considered general purpose have included
features to support specific languages. The original 8086 was good at
running Pascal, but pretty bad at C, for example (this was fixed in the
386). The National 32016 tried to support Modula-2 (and Ada) which
forced the 68020 to add matching features, which then were dropped from
the 68030 as it became obvious that C had won the language wars of the
1980s.

About the original question, since the Burroughs architecture was
eventually implemented as a microprocessor you can say that this was
designed to run Algol:

http://www.cpushack.com/2015/04/18/the-forgotten-ones-unisys-scamp-d-mainframe/

-- Jecel


The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-10 Thread Sean Conner via cctalk
It was thus said that the Great Eric Smith via cctalk once stated:
> 
> The Intel iAPX 432 was also designed to explicitly support block-structured
> languages. The main language Intel pushed was Ada, but there was no
> technical reason it couldn't have supported Algol, Pascal, Modula, Euclid,
> Mesa, etc. just as well. (Or just as poorly, depending on your point of
> view.)
> 
> The iAPX 432 could not have supported standard C, though, except in the
> sense that since the 432 GDP was Turing-complete, code running on it could
> provide an emulated environment suitable for standard C.

  What about C made it difficult for the 432 to run?  

  -spc (Curious here, as some aspects of the 432 made their way to the 286
and we all know what happened to that architecture ... )


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Eric Smith via cctalk
On Apr 10, 2017 2:43 PM, "Chuck Guzis via cctalk" 
wrote:
> Were there any microprocessor chips that attempted to mimic the
> Burroughs B5000 series and natively execute Algol of any flavor?

Yes, that's what the HP 3000 did (before PA-RISC), and they did make
microprocessor implementations of it.

The Intel iAPX 432 was also designed to explicitly support block-structured
languages. The main language Intel pushed was Ada, but there was no
technical reason it couldn't have supported Algol, Pascal, Modula, Euclid,
Mesa, etc. just as well. (Or just as poorly, depending on your point of
view.)

The iAPX 432 could not have supported standard C, though, except in the
sense that since the 432 GDP was Turing-complete, code running on it could
provide an emulated environment suitable for standard C.

When the 432 project (originally 8800) started, there weren't many people
predicting that C (and its derivatives) would take over the world.


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Bill Gunshannon via cctalk


On 4/10/2017 4:42 PM, Chuck Guzis via cctalk wrote:
> On 04/10/2017 11:41 AM, Dave via cctalk wrote:
>> I have a Harris RTX-2000 based system control board for a long
>> defunct system.  The board worked when removed more than 20 years ago
>> in the mid 90's.  The RTX-2000 is a stack-based processor designed
>> for running FORTH.  I think it was designed by Phil Koopman based on
>> his graduate work.  The board is a 16-bit ISA board.  It was part of
>> an MRI system that ran a version of MPE forth with a C-to-FORTH
>> compiler (actually a C-like variant) that spits out a 16-bit FORTH
>> variant with some embedded RTX-2000 code.
>
> That is a bit of a surprise--in my experience it takes very little code
> to support Forth on any processor--that someone would build a dedicated
> chip for it is unusual.
>
> Were there any microprocessor chips that attempted to mimic the
> Burroughs B5000 series and natively execute Algol of any flavor?

No, but Western Digital implemented the UCSD P-machine in hardware
selling it as the Pascal Microengine.  I always wanted one of those
but I fear few have survived the scrap yard.

bill



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Chuck Guzis via cctalk
On 04/10/2017 11:41 AM, Dave via cctalk wrote:
> I have a Harris RTX-2000 based system control board for a long
> defunct system.  The board worked when removed more than 20 years ago
> in the mid 90's.  The RTX-2000 is a stack-based processor designed
> for running FORTH.  I think it was designed by Phil Koopman based on
> his graduate work.  The board is a 16-bit ISA board.  It was part of
> an MRI system that ran a version of MPE forth with a C-to-FORTH
> compiler (actually a C-like variant) that spits out a 16-bit FORTH
> variant with some embedded RTX-2000 code.

That is a bit of a surprise--in my experience it takes very little code
to support Forth on any processor--that someone would build a dedicated
chip for it is unusual.

Were there any microprocessor chips that attempted to mimic the
Burroughs B5000 series and natively execute Algol of any flavor?

--Chuck



Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Charles Anthony via cctalk
On Mon, Apr 10, 2017 at 11:41 AM, Dave via cctalk 
wrote:

> I have a Harris RTX-2000 based system control board for a long defunct
> system.  The board worked when removed more than 20 years ago in the mid
> 90's.  The RTX-2000 is a stack-based processor designed for running FORTH.
> I think it was designed by Phil Koopman based on his graduate work.  The
> board is a 16-bit ISA board.  It was part of an MRI system that ran a
> version of MPE forth with a C-to-FORTH compiler (actually a C-like variant)
> that spits out a 16-bit FORTH variant with some embedded RTX-2000 code.
>
> I also have another card with 3 channels of streaming 16-bit digital I/O,
> with special hardware to implement on-the-fly rotation matrices to the
> streaming output.
> I have all the software and drivers as well, and I have written a C-based
> simulator that can run the FORTH/assembly emitted by the C-to-FORTH
> compiler (as well as the MRI libraries and hardware.)
> If anyone wants to tinker with this hardware, or just pull the RTX-2000
> chip, I would rather find a good home than toss the boards.
> Dave
>


I am interested; I was a Forth programmer back in the day, but never had a
chance to work with Forth processors.

I am in the Seattle area.

-- Charles


RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Dave via cctalk
I have a Harris RTX-2000 based system control board for a long defunct system.  
The board worked when removed more than 20 years ago in the mid 90's.  The 
RTX-2000 is a stack-based processor designed for running FORTH.  I think it was 
designed by Phil Koopman based on his graduate work.  The board is a 16-bit ISA 
board.  It was part of an MRI system that ran a version of MPE forth with a 
C-to-FORTH compiler (actually a C-like variant) that spits out a 16-bit FORTH 
variant with some embedded RTX-2000 code.

I also have another card with 3 channels of streaming 16-bit digital I/O, with 
special hardware to implement on-the-fly rotation matrices to the streaming 
output.
I have all the software and drivers as well, and I have written a C-based
simulator that can run the FORTH/assembly emitted by the C-to-FORTH compiler 
(as well as the MRI libraries and hardware.)
If anyone wants to tinker with this hardware, or just pull the RTX-2000 chip, I 
would rather find a good home than toss the boards.
Dave