Re: LISP implementations on small machines

2019-10-03 Thread Phil Budne via cctalk
Noel wrote:
> The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
> both had cheapo versions where addresses 0-15 were in main memory, but also
> had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
> Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
> a similar "fast memory option".

http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/DEC-10-HMAA-D%20PDP-10%20KA10%20Central%20Processor%20Maintenance%20Manual%20Volume%20I.pdf
p 1-1 (pdf pg 13) says the CPU options were:

KE10  Extended Order Code (byte instructions)
KT10  Memory Protection and Relocation
KT10A Double Memory Protection and Relocation
KM10  Fast Registers

Elsewhere I have a note that the fast registers had 0.21 microsecond
access time.


Re: LISP implementations on small machines

2019-10-03 Thread Guy Sotomayor Jr via cctalk



> On Oct 3, 2019, at 10:26 AM, Paul Koning via cctalk  
> wrote:
> 
> 
> 
>> On Oct 3, 2019, at 12:39 PM, Chuck Guzis via cctalk  
>> wrote:
>> 
>> On 10/3/19 9:01 AM, Noel Chiappa via cctalk wrote:
>> 
>>> The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
>>> both had cheapo versions where addresses 0-15 were in main memory, but also
>>> had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
>>> Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
>>> a similar "fast memory option".
>> 
>> A bit more contemporary example might be the low-end PIC
>> microcontrollers (e.g. the 12F series).   Harvard architecture (14 bit
>> instructions, 8 bit data), but data is variously described as
>> "registers" (when used an instruction operand) or "memory" when
>> addressed indirectly.   That is, the 64 bytes of SRAM can be referred to
>> as either a memory location or as a register operand.
> 
> Then again, the PDP-10 has that "two ways to refer to it" as well.  In that 
> case, you do have dedicated register logic, and what happens is that memory 
> addresses 0-15 are instead redirected to the register array.  The same 
> applies to the EL-X8.  The way you can address things doesn't necessarily 
> tell you what sort of storage mechanism is used for it.
> 

So does the PDP-11.  The 8 registers are also addressable as words near the top of the 
I/O page, so you can do some quite interesting things.  It is also possible to run a 
(small) program entirely in the registers (i.e., with no main memory at all).

TTFN - Guy



Re: Tandem Minicomputers

2019-10-03 Thread Kevin Monceaux via cctalk
Jason,

On Sun, Sep 29, 2019 at 11:46:03PM -0500, Jason T via cctalk wrote:
 
> There's such a thing as "so obscure that no one knows/cares about
> it".  I've had those before.  Do I have another?  It sure is heavy.

I've been curious about Tandem systems for years.  I had the opportunity to
use a Tandem system briefly a couple of decades ago as an end user in
freight bill entry and driver check-in.  I remember very little about the
system, other than the exit key from most, if not all, screens was
F16.



-- 

Kevin
http://www.RawFedDogs.net
http://www.Lassie.xyz
http://www.WacoAgilityGroup.org
Bruceville, TX

What's the definition of a legacy system? One that works!
Errare humanum est, ignoscere caninum.


Re: LISP implementations on small machines

2019-10-03 Thread Paul Koning via cctalk



> On Oct 3, 2019, at 12:39 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 10/3/19 9:01 AM, Noel Chiappa via cctalk wrote:
> 
>> The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
>> both had cheapo versions where addresses 0-15 were in main memory, but also
>> had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
>> Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
>> a similar "fast memory option".
> 
> A bit more contemporary example might be the low-end PIC
> microcontrollers (e.g. the 12F series).   Harvard architecture (14 bit
> instructions, 8 bit data), but data is variously described as
> "registers" (when used an instruction operand) or "memory" when
> addressed indirectly.   That is, the 64 bytes of SRAM can be referred to
> as either a memory location or as a register operand.

Then again, the PDP-10 has that "two ways to refer to it" as well.  In that 
case, you do have dedicated register logic, and what happens is that memory 
addresses 0-15 are instead redirected to the register array.  The same applies 
to the EL-X8.  The way you can address things doesn't necessarily tell you what 
sort of storage mechanism is used for it.
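
In simulator terms it is just an address decode in front of every memory
reference; a rough C sketch of the idea (names and sizes are invented, not
taken from a real KA10 or from any existing simulator) looks like this:

#include <stdint.h>

/* Minimal sketch of the redirect: any memory-style reference to addresses
 * 0-15 is steered into the fast register file instead of core. */

static uint64_t fast_ac[16];          /* 16 ACs, each holding a 36-bit word */
static uint64_t core[256 * 1024];     /* main core memory                   */

static uint64_t fetch_word(uint32_t addr)
{
    if (addr < 16)
        return fast_ac[addr];         /* AC block shadows addresses 0-15    */
    return core[addr];                /* everything else goes to core       */
}

The register fields of an instruction would index fast_ac directly, so the
same sixteen words answer to both names; in the cheap configuration without
the fast memory option, both paths simply end up in core.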

paul




Re: LISP implementations on small machines

2019-10-03 Thread ben via cctalk

On 10/3/2019 10:01 AM, Noel Chiappa via cctalk wrote:

 > From: Paul Koning

 > Some early machines, the PDP-6 I believe is an example, have
 > "registers" in the ISA but they actually correspond to specific parts
 > of main memory.

The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
both had cheapo versions where addresses 0-15 were in main memory, but also
had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
a similar "fast memory option".

Noel

The PDP-5: how soon they forget.  The PC was in core memory for sure;
not sure about the AC, however.
The IBM 1130 had the index registers in core, at addresses 1, 2, and 3.

Today fast memory is .4 ps.  I wonder how the old machines
would compare with today's wonder CPUs, assuming the same transistor
speeds.

Ben



Re: LISP implementations on small machines

2019-10-03 Thread Chuck Guzis via cctalk
On 10/3/19 9:01 AM, Noel Chiappa via cctalk wrote:

> The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
> both had cheapo versions where addresses 0-15 were in main memory, but also
> had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
> Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
> a similar "fast memory option".

A bit more contemporary example might be the low-end PIC
microcontrollers (e.g. the 12F series).   Harvard architecture (14 bit
instructions, 8 bit data), but data is variously described as
"registers" (when used an instruction operand) or "memory" when
addressed indirectly.   That is, the 64 bytes of SRAM can be referred to
as either a memory location or as a register operand.
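
As a rough sketch of the two access paths (the simplified decode below, and
putting INDF and FSR at their usual midrange addresses, are assumptions on my
part, not details taken from a particular 12F datasheet):

#include <stdint.h>

static uint8_t sram[64];              /* the data memory / "file registers"      */
enum { INDF = 0x00, FSR = 0x04 };     /* indirect data register and its pointer  */

static uint8_t read_file(uint8_t f)
{
    if (f == INDF)                    /* indirect: the cell behaves as "memory"  */
        return sram[sram[FSR] & 0x3F];
    return sram[f & 0x3F];            /* direct: the operand names a "register"  */
}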

This isn't unusual--the original STAR-100 had 256 64-bit registers that
could be addressed as register storage or as the first 256 words of
memory.  Eventually, this capability was disabled (the so-called "Rev.
R" ECO) because of conflicts in usage.

--Chuck



Re: SGI Challenge M memory boards

2019-10-03 Thread Ethan O'Toole via cctalk
OK, it was some time ago; it could well be a Challenge L.  It was a big box, 
and was powered by a big 28 V power supply.  (Or was it a 48 V supply?)

Jon


It should be 48 V.  A 9U-ish VME-style card cage.  The Challenge XL / full-rack 
Onyx and the Onyx / Challenge L should all share the same memory.


All those machines are pretty cool.


--
: Ethan O'Toole




Re: LISP implementations on small machines

2019-10-03 Thread Noel Chiappa via cctalk
> From: Paul Koning

> Some early machines, the PDP-6 I believe is an example, have
> "registers" in the ISA but they actually correspond to specific parts
> of main memory.

The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
both had cheapo versions where addresses 0-15 were in main memory, but also
had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
a similar "fast memory option".

Noel


RE: VAX + Spectre

2019-10-03 Thread Dave Wade via cctalk



> -Original Message-
> From: cctalk  On Behalf Of Paul Koning via
> cctalk
> Sent: 03 October 2019 16:28
> To: Stefan Skoglund 
> Cc: General Discussion: On-Topic and Off-Topic Posts

> Subject: Re: VAX + Spectre
> 
> 
> 
> > On Oct 3, 2019, at 10:55 AM, Stefan Skoglund 
> wrote:
> >
> > Thu 2019-10-03 at 09:45 -0400, Paul Koning via cctalk wrote:
> >>> On Oct 3, 2019, at 8:25 AM, Maciej W. Rozycki   wrote:
> >>>
> >>> On Thu, 3 Oct 2019, Maciej W. Rozycki wrote:
> >>>
> > You need an extremely high resolution timer to detect slight
> > differences in execution time of speculatively-executed threads.
> > The VAX
> > 11/780 certainly did
> > not do speculative execution, and my guess is that all VAXen did
> > not, either.
> 
>  The NVAX and NVAX+ implementations include a branch predictor in
>  their microarchitecture[1], so obviously they do execute
>  speculatively.
> >>>
> >>> For the record: in NVAX prediction does not extend beyond the
> >>> instruction fetch unit (I-box in VAX-speak), so there's actually no
> >>> speculative execution, but only speculative prefetch.
> >>
> >> That's a key point.  These vulnerabilities are quite complex and
> >> details matter.  They depend on speculation that goes far enough to
> >> make data references that produce cache fills, and that those fills
> >> persist after the speculative references have been voided.
> >>
> >> Branch prediction is only the first step, and as you point out, that
> >> alone is nowhere near enough.  For example, if a particular design
> >> did speculative execution but not speculative memory references on
> >> addresses that miss in the cache, you'd still have no issue.
> >>
> >
> > Can the speculative pre-fetch of instruction trigger cache fills ?
> 
> I don't know, but that isn't relevant to the Spectre issue.  That one needs
> speculative data loads, visible via a timing channel to user-mode code.
> 
>   paul

Whilst, of course, nothing has been done for the VAX, VSI have checked whether
OpenVMS on Alpha, Itanium, or AMD64 is susceptible...

http://vmssoftware.com/pdfs/news/Customer_Letter_2018_Meltdown_Spectre.pdf

Dave



Re: VAX + Spectre

2019-10-03 Thread Paul Koning via cctalk



> On Oct 3, 2019, at 10:55 AM, Stefan Skoglund  wrote:
> 
> Thu 2019-10-03 at 09:45 -0400, Paul Koning via cctalk wrote:
>>> On Oct 3, 2019, at 8:25 AM, Maciej W. Rozycki wrote:
>>> 
>>> On Thu, 3 Oct 2019, Maciej W. Rozycki wrote:
>>> 
> You need an extremely high resolution timer to detect slight
> differences in
> execution time of speculatively-executed threads. The VAX
> 11/780 certainly did
> not do speculative execution, and my guess is that all VAXen
> did not, either.
 
 The NVAX and NVAX+ implementations include a branch predictor in
 their 
 microarchitecture[1], so obviously they do execute speculatively.
>>> 
>>> For the record: in NVAX prediction does not extend beyond the
>>> instruction 
>>> fetch unit (I-box in VAX-speak), so there's actually no
>>> speculative 
>>> execution, but only speculative prefetch.
>> 
>> That's a key point.  These vulnerabilities are quite complex and
>> details matter.  They depend on speculation that goes far enough to
>> make data references that produce cache fills, and that those fills
>> persist after the speculative references have been voided.
>> 
>> Branch prediction is only the first step, and as you point out, that
>> alone is nowhere near enough.  For example, if a particular design
>> did speculative execution but not speculative memory references on
>> addresses that miss in the cache, you'd still have no issue.
>> 
> 
> Can the speculative pre-fetch of instruction trigger cache fills ?

I don't know, but that isn't relevant to the Spectre issue.  That one needs 
speculative data loads, visible via a timing channel to user-mode code.

paul



Re: VAX + Spectre

2019-10-03 Thread Stefan Skoglund via cctalk
Thu 2019-10-03 at 09:45 -0400, Paul Koning via cctalk wrote:
> > On Oct 3, 2019, at 8:25 AM, Maciej W. Rozycki wrote:
> > 
> > On Thu, 3 Oct 2019, Maciej W. Rozycki wrote:
> > 
> > > > You need an extremely high resolution timer to detect slight
> > > > differences in
> > > > execution time of speculatively-executed threads. The VAX
> > > > 11/780 certainly did
> > > > not do speculative execution, and my guess is that all VAXen
> > > > did not, either.
> > > 
> > > The NVAX and NVAX+ implementations include a branch predictor in
> > > their 
> > > microarchitecture[1], so obviously they do execute speculatively.
> > 
> > For the record: in NVAX prediction does not extend beyond the
> > instruction 
> > fetch unit (I-box in VAX-speak), so there's actually no
> > speculative 
> > execution, but only speculative prefetch.
> 
> That's a key point.  These vulnerabilities are quite complex and
> details matter.  They depend on speculation that goes far enough to
> make data references that produce cache fills, and that those fills
> persist after the speculative references have been voided.
> 
> Branch prediction is only the first step, and as you point out, that
> alone is nowhere near enough.  For example, if a particular design
> did speculative execution but not speculative memory references on
> addresses that miss in the cache, you'd still have no issue.
> 

Can the speculative pre-fetch of instruction trigger cache fills ?



Re: LISP implementations on small machines

2019-10-03 Thread Stefan Skoglund via cctalk
Wed 2019-10-02 at 19:02, Rich Alderson via cctalk wrote:
> From: Mark Kahrs
> Sent: Tuesday, October 01, 2019 7:24 PM
> 
> > The first implementation was done for the 7090 by McCarthy (hence
> > CAR and
> > CDR --- Contents of Address Register and Contents of Decrement
> > Register).
> 
> In the 70x series of IBM scientific systems (704, 709, 7040, 7090,
> 7044, 7094),
> the word "register" referred to memory locations rather than to the
> accumulator
> or multiplier/quotient.  Each memory register was 36 bits long, and
> could be
> treated as 4 fields: A 15 bit address, a 15 bit decrement, a 3 bit
> tag, and a
> 3 bit index selector.
> 
> In the earliest implementation of LISP, there were 4 functions which
> returned
> the different parts of a register: CAR, CDR, CTR, and CIR.  These
> were
> abbreviations for "Contents of the {Address, Decrement, Tag, Index}
> PART OF THE
> Register", not "Contents of the {Address, Decrement} Register" as is
> so often
> misstated.
> 
> Rich
> 
> NB: Information from a talk given on the history of Lisp by Herbert
> Stoyan at
> the 1984 ACM Conference on Lisp and Functional Programming Languages,
> and later
> verified by personal inspection of the code.

It seems that SIMH has prebuilt IBM 70xx machines with LISP installed.
https://simh.trailing-edge.narkive.com/WiVs5570/release-of-a-set-of-simulators-for-ibm-7000-series-mainframes
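
To make the word layout above concrete, here is a rough C sketch of pulling
out the four parts Rich lists.  The bit positions assume the usual 704-style
ordering (address in the low 15 bits, a 3-bit field above it, the 15-bit
decrement next, 3 bits on top), which of the two 3-bit fields is the "tag" and
which the "index selector" is my assumption, and the macro names are invented
rather than taken from LISP 1.5 sources:

#include <stdint.h>

/* w is a 36-bit word held in the low bits of a uint64_t. */
#define ADDRESS(w)    ((uint32_t)( (w)        & 077777u))   /* CAR: low 15 bits  */
#define TAG(w)        ((uint32_t)(((w) >> 15) & 07u))       /* CTR: 3-bit field  */
#define DECREMENT(w)  ((uint32_t)(((w) >> 18) & 077777u))   /* CDR: 15-bit field */
#define INDEX(w)      ((uint32_t)(((w) >> 33) & 07u))       /* CIR: top 3 bits   */

static inline uint32_t car_of(uint64_t w) { return ADDRESS(w);   }
static inline uint32_t cdr_of(uint64_t w) { return DECREMENT(w); }

CAR of a cons is then ADDRESS(word) and CDR is DECREMENT(word), which is why
the two list pointers fit so neatly into those 15-bit fields on a machine with
a 32K-word address space.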


Re: LISP implementations on small machines

2019-10-03 Thread Paul Koning via cctalk



> On Oct 2, 2019, at 3:02 PM, Rich Alderson via cctalk  
> wrote:
> 
> From: Mark Kahrs
> Sent: Tuesday, October 01, 2019 7:24 PM
> 
>> The first implementation was done for the 7090 by McCarthy (hence CAR and
>> CDR --- Contents of Address Register and Contents of Decrement Register).
> 
> In the 70x series of IBM scientific systems (704, 709, 7040, 7090, 7044, 
> 7094),
> the word "register" referred to memory locations rather than to the 
> accumulator
> or multiplier/quotient.  Each memory register was 36 bits long, and could be
> treated as 4 fields: A 15 bit address, a 15 bit decrement, a 3 bit tag, and a
> 3 bit index selector.

While we now think of "register" as a specific bit of hardware distinct from 
memory, that isn't necessary.  The term makes perfect sense as a small set of 
storage elements that are treated differently than main memory in the 
instruction set.  For example, the IBM 1620 has no registers (the ISA only 
references main memory).  Some early machines, the PDP-6 I believe is an 
example, have "registers" in the ISA but they actually correspond to specific 
parts of main memory.  Ditto the Philips PR-8000, which has 8 sets of 8 
registers (one set for each interrupt priority level) actually implemented in 
locations 0-63 of main memory.

In a 1948 computer architecture course, Adriaan van Wijngaarden referred to 
"fast memory" for what we now call registers; that document in effect is an 
early discussion of memory hierarchy.

paul



Re: VAX + Spectre

2019-10-03 Thread Paul Koning via cctalk



> On Oct 3, 2019, at 8:25 AM, Maciej W. Rozycki  wrote:
> 
> On Thu, 3 Oct 2019, Maciej W. Rozycki wrote:
> 
>>> You need an extremely high resolution timer to detect slight differences in
>>> execution time of speculatively-executed threads. The VAX 11/780 certainly 
>>> did
>>> not do speculative execution, and my guess is that all VAXen did not, 
>>> either.
>> 
>> The NVAX and NVAX+ implementations include a branch predictor in their 
>> microarchitecture[1], so obviously they do execute speculatively.
> 
> For the record: in NVAX prediction does not extend beyond the instruction 
> fetch unit (I-box in VAX-speak), so there's actually no speculative 
> execution, but only speculative prefetch.

That's a key point.  These vulnerabilities are quite complex and details 
matter.  They depend on speculation that goes far enough to make data 
references that produce cache fills, and that those fills persist after the 
speculative references have been voided.

Branch prediction is only the first step, and as you point out, that alone is 
nowhere near enough.  For example, if a particular design did speculative 
execution but not speculative memory references on addresses that miss in the 
cache, you'd still have no issue.
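
For anyone who hasn't seen it spelled out, the canonical Spectre-v1 shape in C
looks roughly like this; the array names and sizes are hypothetical, and it is
only a sketch of the pattern, not a working exploit:

#include <stddef.h>
#include <stdint.h>

uint8_t  array1[16];
uint8_t  array2[256 * 64];          /* one cache line per possible byte value */
unsigned array1_size = 16;
volatile uint8_t sink;

void victim(size_t x)               /* x is attacker-controlled */
{
    if (x < array1_size) {          /* predictor may guess "taken" for a bad x */
        uint8_t secret = array1[x];     /* speculative out-of-bounds load      */
        sink = array2[secret * 64];     /* dependent load fills one cache line */
    }
    /* Architecturally nothing leaked once the mispredict is squashed, but the
     * array2 line indexed by `secret` is now warm; probing array2 with a
     * high-resolution timer reveals which line -- exactly the channel above. */
}

The key step is the second, dependent load: without speculative data
references that reach the cache, the branch predictor by itself gives the
attacker nothing to measure.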

paul




Re: LISP implementations on small machines

2019-10-03 Thread David via cctalk
Thanks for that bit of historical information. Things always make more sense in 
context. When I learned Lisp on a B6700 it was hard to understand and harder to 
program. With this bit of context Lisp now makes a lot more sense, and looking 
back, if I had known this then, I'm sure I would have grasped the language much 
more quickly.

David

> On Oct 2, 2019, at 12:02 PM, Rich Alderson via cctech  
> wrote:
> 
> From: Mark Kahrs
> Sent: Tuesday, October 01, 2019 7:24 PM
> 
>> The first implementation was done for the 7090 by McCarthy (hence CAR and
>> CDR --- Contents of Address Register and Contents of Decrement Register).
> 
> In the 70x series of IBM scientific systems (704, 709, 7040, 7090, 7044, 
> 7094),
> the word "register" referred to memory locations rather than to the 
> accumulator
> or multiplier/quotient.  Each memory register was 36 bits long, and could be
> treated as 4 fields: A 15 bit address, a 15 bit decrement, a 3 bit tag, and a
> 3 bit index selector.
> 
> In the earliest implementation of LISP, there were 4 functions which returned
> the different parts of a register: CAR, CDR, CTR, and CIR.  These were
> abbreviations for "Contents of the {Address, Decrement, Tag, Index} PART OF 
> THE
> Register", not "Contents of the {Address, Decrement} Register" as is so often
> misstated.
> 
>Rich
> 
> NB: Information from a talk given on the history of Lisp by Herbert Stoyan at
> the 1984 ACM Conference on Lisp and Functional Programming Languages, and 
> later
> verified by personal inspection of the code.



Re: Fwd: VAX + Spectre

2019-10-03 Thread Maciej W. Rozycki via cctalk
On Thu, 3 Oct 2019, Maciej W. Rozycki wrote:

> > You need an extremely high resolution timer to detect slight differences in
> > execution time of speculatively-executed threads. The VAX 11/780 certainly 
> > did
> > not do speculative execution, and my guess is that all VAXen did not, 
> > either.
> 
>  The NVAX and NVAX+ implementations include a branch predictor in their 
> microarchitecture[1], so obviously they do execute speculatively.

 For the record: in NVAX prediction does not extend beyond the instruction 
fetch unit (I-box in VAX-speak), so there's actually no speculative 
execution, but only speculative prefetch.

  Maciej


Re: Computer Automation Naked Mini circuit boards

2019-10-03 Thread Peter Cetinski via cctalk
I’d like to create a Tandy 150 replica one day since there are no known 
examples in existence.  It was based on the Naked Mini-4 system.  These boards 
seem to be an earlier CA product but I’m not sure.  Anyone here know for 
certain?

http://www.vcfed.org/forum/showthread.php?66885-The-Rarest-Tandy-Computer-of-them-All-The-Tandy-150/page2

Pete

> On Oct 3, 2019, at 5:20 AM, Pontus Pihlgren via cctalk 
>  wrote:
> 
> I have a Naked Mini, where are you located?
> 
> I couldn't see your images... not sure if my vcfed account is still good. 
> So I don't know what you have.
> 
> /P
> 
>> On Wed, Oct 02, 2019 at 09:31:09AM +, Roland via cctech wrote:
>> Hello,
>> I was wondering if anyone has a Computer Automation Naked Mini.
>> I have these boards and I have no clue what to do with them. So if anyone is 
>> interested please let me know. Pictures are in this vcfed topic:
>> http://www.vcfed.org/forum/showthread.php?68302-Computer-Automation-Naked-Mini-circuit-boards
>> Also interested in a swap with Omnibus material...
>> 
>> Regards, Roland


Re: Computer Automation Naked Mini circuit boards

2019-10-03 Thread Pontus Pihlgren via cctalk
I have a Naked Mini, where are you located?

I couldn't see your images... not sure if my vcfed account is still good. 
So I don't know what you have.

/P

On Wed, Oct 02, 2019 at 09:31:09AM +, Roland via cctech wrote:
> Hello,
> I was wondering if anyone has a Computer Automation Naked Mini.
> I have these boards and I have no clue what to do with them. So if anyone is 
> interested please let me know. Pictures are in this vcfed topic:
> http://www.vcfed.org/forum/showthread.php?68302-Computer-Automation-Naked-Mini-circuit-boards
> Also interested in a swap with Omnibus material...
> 
> Regards, Roland