Re: [Simh] pdp11 - console input with high bit set

2020-07-25 Thread Timothe Litt

On 25-Jul-20 13:47, Paul Koning wrote:
>>  But by the mid to late 70s, i.e. with the glass TTY it started to fall from 
>> favor.   I don't know why, but I would suspect this was because dedicated 
>> lines started to supplant telephone circuit-based connections and single-bit 
>> error detect was not useful.  It did not happen that often.
> It could be that glass TTYs were computer peripherals, and typically close to 
> the computer or connected by a modem that was pretty clean.  The older 
> devices tended to be on current loops, possibly quite long ones with 
> debatable signal quality.

How often you got parity errors was a function of modem generation and
line quality - acoustic couplers from your house in the country were
good for frequent parity - and multi-bit - errors.

By the '80s, modems got much smarter.  While the Bell 103 was pure
bit-by-bit FSK, later modem protocols (e.g. USR HST in 1986, then V.42)
added coding improvements and error correction (including
retransmission).  In between, V.34 does a lot of work in the initial
handshake to adapt to line characteristics.

So the error rate became essentially zero (though latency could be
unbounded).

Except in some weird cases (I won't mention which computer rooms had
modems several hundred feet from the computer's interfaces...), the
modem - host and modem - TTY would be within the RS232 limit of 25 ft.

OTOH, parity in hardware was cheap - though software often didn't
handle parity errors very effectively.  Software either used 8-bit
characters already (IBM) or moved to them as i18n came along.  So unless
parity was used in HW, or you had a non-byte architecture (e.g. the
PDP-10, which treats "bytes" between 1 and 36 bits equally), it was
inconvenient.

In any case, while the short modem - DTE interface was still a
vulnerability, once you have an error-corrected path, "parity was for
farmers."

Current loops - especially when optically coupled - were actually quite
good.  Extending RS232 beyond the 25 ft spec could get problematic.
Yes, 3,000 ft was quite possible - but sensitive to environment.  I had
a number of notable cases where customers complained about long-line
(RS232) issues - and switching to current-loop modems (typically > 20 mA)
resolved them.  Hint: you don't want to run long RS232 lines around
elevator machine rooms, or industrial factory floors, or ...

Like anything, it helps to read, understand, and conform to the
specifications...


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] pdp11 - console input with high bit set

2020-07-24 Thread Timothe Litt
Actually, EVEN parity was more common in the early daze of DEC async.
MARK always sets the high bit; EVEN sets it only to make the total
number of 1s even.

Quick test: given that #215 is CR - if the code is looking for #212 for
LF, it's MARK; if it's looking for #012, it's EVEN.

Note that the digits can also be used - e.g. '0' => 060 is EVEN, while
260 would be MARK (or ODD).

Generating the expected format is a function of the terminal emulator.

On 24-Jul-20 04:37, Johnny Billquist wrote:
> You need to have your terminal set to MARK parity.
>
>   Johnny
>
> On 2020-07-24 01:56, Paul Moore wrote:
>> I am trying to run an RK11 diagnostic and am stuck.
>>
>> The diagnostic asks the user how many drives to test and I can get
>> the input to work
>>
>> Looking at the code, it is looking for digits and then cr.
>>
>> But it is actually looking for  #215, which is 0x8d. Which is CR with
>> the high bit set. (It also looks for #377 del with HB set)
>>
>> So what happens is that it just keeps reprompting
>>
>> I don’t see how that character ever gets into the system. I did ‘set
>> tti 8b’ but it made no difference. I can post the relevant code if
>> needed.
>>
>>
>>
>

Re: [Simh] FW: pdp 11 timing -->anf10 workstation on pdp11 with throttling

2020-07-20 Thread Timothe Litt
> Well, then the first question that needs to be answered, which model
> of PDP-11 was that code expected to run on

ANF-10 primarily runs on the 11/40 (well, and the PDP-10s).  Exceptions:
DN200 remote station: 11/34; DN22: 11/04.

CHK11 compiles accordingly.

On 20-Jul-20 18:19, Johnny Billquist wrote:
> Well, then the first question that needs to be answered, which model
> of PDP-11 was that code expected to run on, because the results will
> differ depending on that. Also, what kind of memory? (I would guess
> some old, small core memory boards.)
> The PDP-11 execution speed really does vary based on many factors, on
> real hardware...
>
>   Johnny
>
> On 2020-07-21 00:13, s...@swabhawat.com wrote:
>>
>>
>> L.S.
>>
>> Actually where this is important, is when using Pdp11 based ANF10
>> workstations in the Tops10 realm.
>>
>> When starting up, the Anf10 software on the pdp11 sim tests various
>> devices for functionality, using instruction-count-based loops etc.
>> When all the devices necessary (paper tape reader/punch, incremental
>> plotter interface, DZ and DH multiplexors, DMS and DUP/KDP devices
>> and DL11 interfaces) are properly verified, it cranks up the
>> communication configuration, scanning the network for active
>> Pdp10 Tops10 host systems.
>> The throttling of the pdp11 should be carefully selected to let this
>> function.
>>
>>
>> Reindert
>>
>>
>> -Original Message-
>> From: Simh [mailto:simh-boun...@trailing-edge.com] On Behalf Of
>> Johnny Billquist
>> Sent: Monday, 20 July, 2020 23:20
>> To: Paul Moore ; simh@trailing-edge.com
>> Subject: Re: [Simh] pdp 11 timing
>>
>> Instruction timing as such is not relevant. Different implementations
>> had very different timings, not to mention that speed of memory also
>> makes a difference.
>>
>> Devices basically do not have a strict timing either, but yes, there
>> is plenty of software that assumes that an interrupt does not happen
>> before a single instruction has been executed after the previous
>> interrupt, from the same device, for example.
>> On real hardware that was just an absurd case that lots of code never
>> considered, since it wasn't really physically possible for it to happen.
>>
>> The throttling in simh is because some people want the emulation to
>> somewhat mimic the real thing. For some people, that experience of
>> slowness is desirable.
>>
>>     Johnny
>>
>> On 2020-07-20 23:10, Paul Moore wrote:
>>> (I am writing my own emulator just because I have never done that
>>> before, and the PDP 11 is such a pivotal system in the history of
>>> modern computing it seemed worth learning about, and what better way
>>> to learn than to emulate it )
>>>
>>> So how important is timing of instruction execution and device
>>> response?
>>>
>>> The PDP 11 docs go to great lengths giving instruction timing. But the
>>> fact that there is a % throttle in simh suggests that's not important.
>>> I assume that turning that throttle up and down makes the emulated CPU
>>> go faster and slower. I have seen code using simple counters as delays
>>> but I assume that if you want precision you use the Kw11.
>>>
>>> With regard to device responses, I have found that going ’too fast’
>>> upsets code. If they do something that triggers an interrupt (set ‘go’
>>> for example) and the interrupt arrives too soon (like before the next
>>> instruction) they get surprised and can misbehave (you could argue
>>> that’s a bug, but that’s irrelevant). So always wait a few beats. But
>>> I assume there is no reason to try to precisely emulate the timing of
>>> , say, a disk drive. (The early handbooks state how awesome the async
>>> nature of the IO subsystem is cos you can swap out old for new and
>>> things just go faster).
>>>
>>>
>>>
>>
>

Re: [Simh] A lost 18b peripheral

2020-05-25 Thread Timothe Litt

On 25-May-20 09:29, Bob Supnik wrote:
> Looking through the XVM/MUMPS15 listing, I saw that the timesharing
> terminals were attached via a DC01 communications multiplexer. There's
> no reference to such a device anywhere in the 18b literature on
> Bitsavers.
>
> Because the PDP15 is a contemporary of the PDP8/I and the KI10, I
> thought it might be a variant of their multiplexers, but I can't find
> a match.
>
> Another mystery.
>
> /Bob
>
Just for fun, I looked a bit further:

Here is the DC01-EB system exerciser test (diagnostic) in PDP-15 assembler:

https://ia800403.us.archive.org/28/items/bitsavers_decpdp15diSystemExerciser19DC01EB_2205397/19_DC01EB.pdf

It's rather entertaining reading.  Would not like to be in the room when
it runs (on 32 TTYs); it seems to mostly be good for finding crosstalk. 
But there's a lot to be gleaned from the code about how the DC01 works.

Computerworld 17-Nov-1971 has a classified ad for a PDP-15 with DC01-EB,
MUMPS, and other old friends.

Enjoy.


Re: [Simh] A lost 18b peripheral

2020-05-25 Thread Timothe Litt
On 25-May-20 09:29, Bob Supnik wrote:
> Looking through the XVM/MUMPS15 listing, I saw that the timesharing
> terminals were attached via a DC01 communications multiplexer. There's
> no reference to such a device anywhere in the 18b literature on
> Bitsavers.
>
> Because the PDP15 is a contemporary of the PDP8/I and the KI10, I
> thought it might be a variant of their multiplexers, but I can't find
> a match.
>
> Another mystery.
>
> /Bob
>
The DC01 was a CSS product.  The Option/Module list reveals that it was adapted
for the 8, 9, 15 & 11.  You may find some hints in the other families,
though as you know, registers tend to have family ties...

  * DC01-AA 8-line scanner, full duplex, TTY &/or EIA (PDP-8)
  * DC01-AB 8-line TTY scanner, half duplex (PDP-8)
  * DC01-AC 8-line async scanner, full duplex, EIA, 3-cycle (PDP-8)
  * DC01-AL Full-duplex line unit for DC01-AA
  * DC01-BB 8-line TTY scanner, half duplex, echo (PDP-9)
  * DC01-EA 8-line scanner, full duplex (TTY &/or EIA) (PDP-15)
  * DC01-EB 8-line TTY scanner, half duplex, echo + logic (PDP-15)
  * DC01-EC 8-line async scanner, full duplex, EIA, 3-cycle (PDP-15)
  * DC01-ED 8-line async scanner, half duplex, echo, TTY or EIA, separate speeds (PDP-15)
  * DC01-FA 8-line scanner, full duplex (TTY &/or EIA) (PDP-11)
  * DC01-FD Improved DC01-FA (PDP-11)
  * DC01-FJ 32-line scanner, full duplex, can use any DF11 interface (PDP-11)

Responsible engineers: B. Vachon, Bill Weiske, D. Hopkins, J. Larkin,
Reg Wetherall, J. Stefanowicz, Russ Moore


Re: [Simh] OpenVMS time conversion routines

2020-05-07 Thread Timothe Litt
simtools/extracters/ods2/vmstime.c


On 07-May-20 01:28, Baker, Lawrence M wrote:
> Does anyone know of any portable OpenVMS 64-bit time conversion
> routines written in C?  I.e., that do not depend on 64-bit data types
> so they run on 32-bit machines?  Maybe in the SIMH GitHub?  Out there
> in the Interland?
>
> I am writing a simtools converter that combines on-disk OpenVMS Backup
> save sets into a SIMH .tap image of an OpenVMS Backup ANSI tape
> volume.  I want to use the date the backup was done from the Backup
> save set header for the ANSI HDR1 Creation Date.
>
> You might ask why?  Lately I have had to restore Backup save sets
> stored on our NFS file server to SIMH VAXes over a DECnet/DAP-to-NFS
> gateway I built a number of years ago.  (I wrote to this group about
> it in a thread about RSTS/E 10.1-L and Paper tape on January 6, 2016.)
>  It takes about 2 days to restore an ~8GB disk image backup from the
> NFS server, through the gateway running on a SheevaPlug ARM SoC, to
> the SIMH VAX running on my desktop iMac.  I am working from home at
> the moment, of course.  I have become good friends with GNU screen
> because of SSH inactivity disconnects and VPN failures.  When I tried
> to restore a 75GB disk, expecting it to take 10-14 days, our
> "friendly" IT security monsters rebooted my iMac on me after 4 days.
>  Grrr.  I want to try breaking the NFS file server transfer step from
> the SIMH VAX restore operation.  OpenVMS is not so easy as RSX was to
> read /FOREIGN disk drives as files.  I could not figure out a way to
> just MOUNT a Backup save set as a SIMH disk image and get that to
> work.  I was able to use Mark's tar2mt converter and, using the proper
> OpenVMS MOUNT /RECORDSIZE and /BLOCKSIZE qualifiers, was able to read a
> Backup save set from an unlabeled tape image.  Labeled tapes are
> easier to use than unlabeled tapes, since the file names and file
> formats are on the tape with the file data.  I know how to write ANSI
> tape labels, so I have taken it upon myself as a challenge to write a
> converter.  I think this is the last piece I need for what I want it
> to do.  I'll certainly announce it when it is done.
>
> Thank you in advance for your help.
>
> Larry Baker
>
> US Geological Survey
>
> 650-329-5608
>
> ba...@usgs.gov
>
>
>

Re: [Simh] Is it possible to simulate the first Vaxen I ever used?

2020-03-23 Thread Timothe Litt

On 23-Mar-20 15:29, Eric Smith wrote:
> On Mon, Mar 23, 2020 at 1:01 PM Timothe Litt wrote:
>
> The PDP-11s, like the 11/05 were ttl, the CPU ALU was something
> like 74181 4 bit slices.  I don't count them as "real" microcoded
> machines for some reason - perhaps because the ucode was all ROM,
> and IIRC there was no assembler for it. 
>
>
> All PDP-11 models except the original KA11 CPU used in the PDP-11/20
> were microcoded. There were microassemblers for many of the models,
> but for most models the microassemblers never were provided to
> customers, and were probably not readily available even inside DEC.

I didn't see them, but that wasn't my world.  I do remember that
printouts of the ROMs were in some of the printsets.  And some of the
people I talked with said that the ROMs were hand generated.  For some
value of "hand".  Anyhow, right or wrong, I thought of those microcodes
more like PLA equations than "microcode" as used in the KL, IBM, and
other systems where there was significant flexibility.

> Later, general-purpose microassemblers were used, such as MICRO and
> MICRO2. (It appears to me that there were two different DEC
> microassemblers that both used the name MICRO2; the one from LCG
> appears to be significantly different than the one used for VAXen.)
>
Micro came first, developed by Tim Leonard for the KL-10 in PDP-10
assembler.  It introduced the constraint syntax, comma-separated fields,
defaults, / for contents (like DDT) and most of the pseudo-ops.  It
knows about two simultaneous control stores - U and D.  (You need that
when one contains a dispatch address for the other.)  It was also used for
the KS10.

Meantime, Micro2 started as a clone, re-written in BLISS, for the VAX.  It
uses somewhat different syntax, handles more simultaneous control
stores, and is generally more flexible.

I don't recall there being a second LCG-developed Micro2 - however, we
did use Micro2 for Jupiter (IIRC) and the VAX 9000 (for sure).  (Not
sure about the 8600; I didn't have much to do with it.)  It is possible
that it forked - at times and for some tools there was cooperation
between the CAD groups.  At other times, not so much.  DECsim went back
and forth.  Microcode tools tended to be frozen early in a project - too
much risk of breakage by changing them (including taking latest updates)
mid-stream.  Micro2 definitely kept evolving.  You may have seen such a
frozen snapshot.

> The 11/03 (AKA LSI-11) and the 11/60 were the only PDP-11 models for
> which DEC actually "supported" customer-written microcode, for small
> values of "support": a WCS, microassembler, and microcoding
> documentation were sold, but beyond that you were on your own.

Yup - I'd add "reluctantly" to "sold".  Wasn't there a WCS option for
the 11/34?  Memory fails...

> The 11/03 and 11/60 had sufficiently general hardware data paths to
> make custom microcoding worthwhile; for most other PDP-11 models the
> data paths were very limited, reducing the utility that a WCS would
> have provided. That's why the aftermarket WCS option for the PDP-11/40
> also added additional hardware such as a field extractor and a
> hardware stack, with the microword width increased by 24 bits to
> control the additional hardware.
>
I'm not sure what the market was for WCS - at that time, attached FP
boxes (e.g. Floating Point Systems) were in vogue, and probably provided
better acceleration for most purposes.  For real-time, the PDP-11 pretty
much kept up with the outside world.  You could maybe eliminate some bus
traffic/memory references - but in development time and cost of moving
to the next generation, it never seemed like a win.  Plus there was a
mystique about ucode - the tight coupling to hardware took some rare
talent to actually make it work.



Re: [Simh] Is it possible to simulate the first Vaxen I ever used?

2020-03-23 Thread Timothe Litt

On 23-Mar-20 14:36, Paul Koning wrote:
>
>> On Mar 23, 2020, at 1:29 PM, Timothe Litt  wrote:
>>
>> ...
>> On the VAX 730: as far as I'm aware it's the only VAX built out of
>> standard LSI CPU components. The guts of the CPU is AMD 2901
>> bit-slice chips. All other DEC microprogrammed machines I can think
>> of had their own purpose-designed logic. 

The PDP-11s, like the 11/05, were TTL; the CPU ALU was something like
74181 4-bit slices.  I don't count them as "real" microcoded machines
for some reason - perhaps because the ucode was all ROM, and IIRC there
was no assembler for it. 

The KS10 uses 2901s for its CPU - because of the half-word arithmetic,
it's actually 40 bits wide.

The 780 is TTL.  The 785 is the same machine, upgraded to 74S.

The UDA data mover used 2901s so cleverly that it went into several more
generations of disk controllers.  It  timeslices the 2901 such that it
runs two programs - one on the A and one on the B phase of each clock cycle.

> 2901 bitslices do appear in other DEC products, the UDA comes to mind.  And 
> that has, for running on-board diagnostic tools, a small PDP11-like 
> instruction set implemented in a little bit of 2901 microcode.  By Richie 
> Lary, wizard of compact software...
>
>   paul
>

Re: [Simh] Is it possible to simulate the first Vaxen I ever used?

2020-03-23 Thread Timothe Litt

On 23-Mar-20 13:53, Eric Smith wrote:
> On Mon, Mar 23, 2020 at 11:35 AM Robert Armstrong wrote:
>
> > Timothe Litt wrote:
> > KS10 ... The 8085 code is crammed into UV EPROMs.
>
>   Was all of the KS CFE code in EPROM?  On the 730 only a small
> kernel of 8085 code (about 2K as I remember) was in ROM/EPROM and
> the rest of the 8085 memory was RAM.  The first thing the 8085 did
> at power on was to load the rest of the 8085 code from the TU58. 
> That made it possible to issue updates to the CFE code as well as
> the microcode.
>
>
> All the KS10 front end 8080 code (not 8085!) was in EPROM, up to four
> 2716 EPROMs for 8KB of code. The only RAM was two 2114 chips (each
> 1Kx4), for 1KB of RAM. The 8080 code would load the KS10 microcode
> from mass storage.
>
Typo on my part.  You are correct, the KS CSL is an 8080.  And all 4
EPROMs are full.  The code uses INT instructions with a function code
following to save bytes on subroutine calls.  Yet it provides full
remote diagnosis support, as well as a lot of RAMP (Reliability,
Availability, Maintainability, and Performance) features that were a
challenge on the KL.  The KL has a whole 11/40 FE with an OS...

Did I mention that when one of my colleagues came back from (LCG)
European DECUS when RAMP was announced, he reported that after the
session a helpful customer pointed out that in Dutch, "ramp" means
"disaster"...



Re: [Simh] Is it possible to simulate the first Vaxen I ever used?

2020-03-23 Thread Timothe Litt

On 23-Mar-20 13:35, Robert Armstrong wrote:
>> Timothe Litt  wrote:
>> KS10 ... The 8085 code is crammed into UV EPROMs.
>   Was all of the KS CFE code in EPROM? 

Yes, it is.  There are a couple of microwords in the CRAM that deal with
console responses (single step, console interrupts for the serial
lines).  But that's it.

The KS can, and does, initiate some bus cycles to get an RH11 tape or
disk to load a record into PDP-10 memory.  That record contains the CRAM
ucode.  The 8080 reads it from -10 memory and loads the CRAM.  And once
it's a PDP-10, the next stage of boot (again, a few instructions) loads
the monitor bootstrap.

The CSL has no dedicated mass storage - and that was an issue of
reliability even more than cost.  The other issue is that CSL didn't
have a lot of space.  At the time, you wouldn't put DRAM on an 8085. 
(And if you did, you needed yet another few chips of refresh controller,
address decoders, etc.)  SRAMs were small and expensive.  And once you
took up the space for an EPROM socket, you might as well make it big
enough to hold everything...

One nice thing about the KS is that it was very well documented.  (Had
to do something with the time while internal politics prevented its
release before the 780...)  There's a copy of the technical manual on
bitsavers, which you can read for more entertainment.

>  On the 730 only a small kernel of 8085 code (about 2K as I remember) was in 
> ROM/EPROM and the rest of the 8085 memory was RAM.  The first thing the 8085 
> did at power on was to load the rest of the 8085 code from the TU58.  That 
> made it possible to issue updates to the CFE code as well as the microcode.
>
> Bob
>
>

Re: [Simh] Is it possible to simulate the first Vaxen I ever used?

2020-03-23 Thread Timothe Litt
On 23-Mar-20 13:09, Eric Smith wrote:
> On Mon, Mar 23, 2020 at 7:34 AM Robert Armstrong wrote:
>
> The 730 was interesting in that ALL of the CPU microcode was in
> RAM and was loaded by the CFE at boot time.  It was possible to
> locally modify the 730 microcode, and DEC even had a set of
> microcode development tools for the 730.  I've never seen them
> except in references.
>
> [snip]
> The first two DEC machine to use entirely RAM control store were the
> KL10 and KS10 36-bit PDP-10 CPUs, in 1975 and 1978, both used in
> DECsystem-10 and DECSYSTEM-20 systems. The KL10 was the biggest and
> most expensive PDP-10, and the KS10 was the smallest and least
> expensive.  The earlier 36-bit CPUs, the 166 (PDP-6), KA10, and KI10,
> were hardwired rather than microcoded.
>
Yes, with modules and wire-wrap backplanes.  They could be (and were)
ECOed in the field.

The KL10 was the first microcoded machine.  And first and most
complicated in other dimensions.  Bugs were expected, and reducing the
cost of dealing with them was a consideration.  It was a good call. 
Besides that, having a fully programmable microstore allowed us to do
functional upgrades too.  Like the CI/NI, and the GFloating support. 
GFloat was pure ucode.

> Since the KL10 was DEC's biggest, most expensive machine at the time,
> it wasn't nearly as cost sensitive as their other CPUs, so there
> probably wasn't even any consideration given to using PROM for the
> control store.

I don't think you could have found fast enough PROMs.  The KL is an ECL
machine.  The RAMs are ECL, and the timing is hairy enough that the
modules are only populated 1/2 way back - to fill the board would break
timing.  The boards also have loops of etch that serve as delay lines. 
Manufacturing would short the loops at the right place for each board -
depending on how the board (and RAMs) turned out.  Today we take for
granted process controls and tolerances that were, if not unattainable,
unaffordable at that time.


>
> The decision to use RAM control store on the KS10 is less obvious. It
> was still an expensive machine, maybe 20% or 25% of the cost of a
> KL10, so it may still have not been considered cost sensitive, and the
> benefits of easy microcode updates may have been considered more
> important than they would have been on e.g. the PDP-11 CPUs.
>
We learned from the KL that it was worthwhile.  TCO (DEC's as well as
the customers').

The KS did compromise, however.  While the control store is all RAM, the
dispatch microcode is in metal PROMs.  I still have a few.  And, I found
a bug in a privileged instruction that required a DROM change to fix. 
We redefined the instruction instead.  Luckily, no other issues affected
the DROM, though it did prevent adding some instructions that the KL had
(and the OS liked).

> The KS10 control store RAM has parity, and RAM parity errors were
> apparently fairly common. A control store parity error would halt the
> CPU and interrupt the 8085 front end, which would reload the control
> store and restart the machine. DEC patented that.
>
>
The 8085 code is crammed into UV EPROMs.   Quite a few coding tricks to
save bytes and make it fit.  It required several rounds of changes -
which meant swapping boards.  FLASH at that time was not available in
the required densities, was insanely expensive - and had an annoying
habit of being forgetful.  I have some neat stories about the latter -
for another day.



Re: [Simh] Various

2020-02-14 Thread Timothe Litt

On 13-Feb-20 20:57, Johnny Billquist wrote:
> On 2020-02-14 01:35, Timothe Litt wrote:
>> On 13-Feb-20 19:21, Johnny Billquist wrote:
>>> On 2020-02-13 17:42, Clem Cole wrote:
>>>>
>>>>
>>>> On Thu, Feb 13, 2020 at 11:38 AM Clem Cole wrote:
>>>>
>>>>     I think I saw a card read/punch only once on a PDP-6 IIRC, but it
>>>>     might have been a KA10.   I don't think I ever saw one on a
>>>> PDP-8/11
>>>>     or Vaxen.
>>>>
>>>> The more I think about it, there must have been one or two in the
>>>> mill or the machine room in MRO, but I just can not picture them.
>>>
>>> As far as I know, there was no punch for the PDP-8 or PDP-11.
>>> However, there were readers.
>>>
>>> And the PDP-11 reader controller sat on the Unibus, so it would not
>>> be hard to get it working on a VAX either. If that was officially
>>> supported or not I don't know, though.
>>>
>>> There were a bunch of PDP-11 Unibus peripherals that were never
>>> supported on a VAX. DECtape comes to mind, as well as RK05.
>>>
>>>   Johnny
>>>
>>
>> See my previous note.
>
> Came to yours later...
>
>> The punches you mention do exist, as do others (Not particularly
>> common or popular):
>>
>>   * PDP-11: CP11-UP Punch interface for Univac 1710 Card RDR/PUNCH
>
> Was that a CSS product perhaps? Even the PDP-11 Peripherals
> handbook from 1976 doesn't mention it. There is only CM11, CR11 and
> CD11. All three are card reader only.

Special Systems, California. Responsible design engineer: Bob Edwards

Don't read anything into "Special Systems" - CSS just means "low volume"
- CSS would sell to anyone, though if a customer's request seemed
unique, the first (sometimes only) customer would pretty much pay the
NRE.  "Low volume" is relative - in the late 80s, line printers were CSS
products.

Can't say much about the CP11's volume - I only saw one.  I expect it
was low.

>
> Haven't managed to find anything on bitsavers yet, but there are a
> bunch of places to search, so I might just have missed it.
>
>> Card readers were sold and supported on all systems thru VAX.
>
> Thanks for clarifying that for me. I wasn't at all sure about the VAX.
>
>> Someone wrote a DECtape driver for VAX - I think Stan R., though it
>> wasn't supported.  DECtape controllers are odd devices - the TD10 is
>> reasonably smart, but the others put realtime constraints on the
>> drivers that could be hard to meet.  Anyhow, by the time the VAX came
>> out, TU58 and Floppies were cheaper and denser media.
>
> I actually do remember seeing it. Fun thing. :-)
>
>> There was also an unsupported DECtape driver for TOPS-20.
>
> KLs with DECtape was always only Tops-10?

Yup.

TOPS-20 had no official support for any IO bus device - except the AN20
(ARPAnet/IMP interface).  Even in that case, the DIA/DIB20 was
difficult to get on the 20 - it was standard on the 10.

However, several drivers for IOB devices existed.  Including the card
reader/punch.

The issue was simply that the IOB had been superseded by MASSBUS (for
DMA devices - disk, tape) - the DF10 channels were expensive in $ and in
memory ports.  For most unit record & comm devices, it was cheaper and
lower-overhead to hang them on the PDP-11 front end's Unibus, where the
drivers made the devices smarter (and cheaper).  E.g. the 11 handled
DMA, modem control, even broadcast messaging.  And card images.  IOB
card readers interrupt the -10 for every column; even with BLKI in the
interrupt locations, this was annoying.  A typical IOB controller would
be several rows of modules, plus power and cooling.  Just the IOB paddle
cards used more backplane space than a Unibus SPC slot.

DECtape, the TD10, is an IOB (but not DMA) device.  Thus, no support. 
Customers who screamed loudly enough and were migrating from TOPS-10
could make it work - at the price of a DIB20 (a full cabinet) and a
TOPS-20 source kit.  When they heard the prices, most swallowed hard and
moved their data to disk or 9-Track.  The problem, of course, is that at
the time there was no replacement "personal media" on the -20 -- the FE
floppies (RX01) were not accessible to the OS, and there was no TU58
(even un-)support on TOPS20.

Both university and engineering shops liked personal media - mostly to
reduce demand for and clutter on expensive disk space.  But TOPS-20
management knew better.

>
>   Johnny
>

Re: [Simh] Of DEC and cards

2020-02-13 Thread Timothe Litt
In the TOPS-10 world, ANF-10 RJE stations could have card readers and
printers.  The TOPS-10/20 DN200 also.

But most "RJE" station software on the DEC side made the foreign
mainframe look like a batch queue, and you would submit a file to that
queue.  The software (typically DEC 2780/3780 emulation for
{TOPS-10,TOPS-20, VMS, ...}) would send the file to the mainframe from
an imaginary card reader; results similarly to an imaginary printer
(ending up in a .log or other file).  I suppose "virtual" would be the
modern word for "imaginary", but it comes to the same thing :-)

On the KL based systems, the software was a combination of PDP-11 front
end code (a dedicated DN20) and code running on the KL.  The KS used a
KDP, though there was also a "DN22" remote station.

I don't know exactly what UNIX did - wasn't in that world much then. 
But I wouldn't be surprised if the strategy was similar - user prepares
a file, software does the code conversions to/from EBCDIC, and the usual
lies told (er, device emulation performed) in both directions...  That
would certainly have led to the emulation work you recall - especially
given the fluid definitions of character sets at the time.  I don't
recall the same efforts to offload development to UNIX as to the DEC
proprietary systems - IIRC, compilers for legacy languages (COBOL, RPG,
PL/I) came to UNIX rather later, and with less rich/performant
implementations. 

In my experience, physical card equipment, as previously noted, was
either a legacy/migration requirement, or simply a bureaucratic legacy
"requirement".  The DEC value proposition was that cards were expensive,
awkward, slow, and painful to create, modify/debug with.  Interactive TS
solved those problems; the emulations were a medium of exchange between
the legacy/enterprise systems and the more productive DEC systems. 

Readers: quite common.  Punches, much less so.

On 13-Feb-20 13:37, Clem Cole wrote:
> One last reply here, but CCing COFF where this thread really belongs...
>
> On Thu, Feb 13, 2020 at 12:34 PM Timothe Litt  <mailto:l...@ieee.org>> wrote:
>
> OTOH, and probably more consistent with your experience, card
> equipment was
>
> almost unheard of when the DEC HW ran Unix...
>
> You're probably right about that Tim, but DEC world was mostly
> TOPS/TENEX/ITS and UNIX.  But you would think that since a huge usage
> of UNIX systems were as RJE for IBM gear at AT&T.  In fact, that was
> one of the 'justifications' of PWB.  I'm thinking of the machine rooms
> I saw in MH, WH and IH, much less DEC, Tektronix or my
> university time.  It's funny, I do remember a lot of work to emulate
> card images and arguments between the proper character set
> conversions, but  I just don't remember seeing actual card readers or
> punches on the PDP-11s, only on the IBM, Univac and CDC systems. 
>
> As other people have pointed out, I'm sure they must have been around,
> but my world did not have them.

Re: [Simh] [COFF] Of DEC and cards

2020-02-13 Thread Timothe Litt
On 13-Feb-20 14:57, Noel Chiappa wrote:
> > From: Clem Cole
>
> > I just don't remember seeing actual card readers or punches on the
> > PDP-11s
>
> I'm not sure DEC _had_ a card punch for the PDP11's. Readers, yes, the CR11:
>
>   https://gunkies.org/wiki/CR11_Card_Readers
>
> but I don't think they had a punch (although there was one for the PDP-10
> family, the CP10).
>
> I think the CR11 must have been _relatively_ common, based on how many
> readers and CR11 controller cards survive. Maybe not in computer science
> installations, though... :-)
>
>   Noel
Not common, but yes:

CP11-UP Punch interface for Univac 1710 Card RDR/PUNCH

Before anyone asks, there were also:

CP08-(N,P) (CSS) Data Products Speedpunch 120 100 CPM Punch and controller

CP10-(A,B) MD6011 300 CPM CARD PUNCH & Controller (60,50 Hz)

CP15-(A,B) Ditto for the -15

and

CP20-E (The CP10 for orange boxes)

There were a few other part numbers, especially for the 10/20, which
included various mechanical options - e.g. racks for the controllers vs.
just the controller, colors, etc.  What's listed are the main models for
each family.

The CP01-(A,B): "Documation LC15 Model 2 Card Punch, 80 Col, 100 CPM,
RS232 interface, ASCII or Imaged mode, 100-9600 BAUD"

Not sure what platforms used the CP01...



Re: [Simh] Various

2020-02-13 Thread Timothe Litt
Thanks.  Me too. 

Might be a regional dialect - like "Hi" vs. "Hey" or "labor" vs.
"labour".  Not worth arguing, especially since Al has been the savior
(saviour) of so many bits :-)

On 13-Feb-20 14:38, Bob Eager wrote:
> It was 'chad' back in 1971 when I was using punched cards.
>
> On Thu, 13 Feb 2020 08:37:15 -0800
> Al Kossow  wrote:
>
>>>  "chad bin full"  
>>
>> "chips" not "chad" no matter what the Y2000 revisionists insist on
>> saying.
>>
>>
>>

Re: [Simh] Card Readers on PDP-11's

2020-02-13 Thread Timothe Litt
Yes, and until ~ the 1442, the IBM readers used a row of metal brushes
to read the cards (they'd go through the holes and contact a metal plate
on the bottom).  All the early IBM gear was mechanically interesting - I
suppose from their origins.  Springs, brushes, cams, levers, solenoids,
microswitches, belts and motors!  A mechanical engineering degree was
probably more helpful than an EE (or later CS)...

The 1442 (ish) introduced optical (incandescent lamp + photocell)
readers; Documation also was optical.  IIRC, it also had a taller path,
so was more tolerant of warpage/curled edges.  Toward the end, the light
source became LEDs. 

Anything that avoided contact with the cards helped reliability.  The
challenge for the designers was to have tolerances large enough to avoid
jams, but small enough to prevent picking more than one card, or
allowing skew in transit.

The off-line card sorters and reproducing punches (with the wired
plugboards, often used with mark-sense) were also mechanical marvels -
or monsters - depending on which side of the Field Engineers' toolbox
you stood.

On 13-Feb-20 13:00, Paul Koning wrote:
>
>> On Feb 13, 2020, at 12:17 PM, Robert Thomas  wrote:
>>
>> ...
>> The Documation card reader was fairly reliable and didn't chew up as many 
>> cards as the IBM reader.
> I can believe that.  The IBM card readers and punches I've seen (on a 360/44) 
> had a pick mechanism that moves a metal block with a small step in it, sized 
> to match the nominal thickness of the card.  This was supposed to catch the 
> far edge of the card and *push* it into the throat of the feed mechanism.  If 
> there was anything slightly wrong, it would accordion-fold the card instead.
>
> The Documation readers had vacuum operated pick mechanisms that acted on the 
> leading edge of the card.
>
>   paul
>

Re: [Simh] Of DEC and cards

2020-02-13 Thread Timothe Litt
I don't have numbers, but I do have experience. 

I ran into quite a few card readers on DEC gear - fewer (but some) punches.

8s, 11s, 10s, 20s, and VAX - they were gone by alpha.

The card readers were most often specified for migrations - people who had a
business process, or even just software to move from a card environment.
Perhaps a 360/70, or 1130.  Remember that even into the early 80s, phone
bills came with an "IBM" card that went back with your check; Mass driver's
licenses were created with punch cards...

I sometimes thought that a card reader should have been bundled with the
COBOL and RPG compilers...  There were some RFQs where it was clear that
you had to have a card reader just to check a box - you'd go to the site
years later and see layers of undisturbed dust on the dust cover :-)

Then there were the people who used DEC gear to write (and especially debug)
jobs that would run on the more expensive mainframes.

The compilers (FORTRAN, COBOL, RPG) all had strict card-image modes as well
as looser "interactive" modes.  Not just for standards conformance, but to
deal with the card sequence fields at the right end.

The CR10A would read 1000 cards/min (833 at 50 Hz) - and was noisy enough
to be heard over the fans/AC of a 10 in a machine room.  It was rather
finicky; the Documations were more reliable.  Most models on the 11s were
slower, but IIRC there was a 1,200 CPM model. 

The CP10 did a whopping 200 CPM - or 365 if only the first 16? columns were
punched.  Punching was always slower - and mechanically more challenging.

The 10/20 MPB and GALAXY batch systems supported the model of preparing
jobs on cards - and feeding them in a continuous stream.  Some university
environments used that into the 80s.  Being DEC, the "JCL" was trivially
simple; nothing like the IBM nightmares of complexity. 

OTOH, and probably more consistent with your experience, card equipment was
almost unheard of when the DEC HW ran Unix...

On 13-Feb-20 11:38, Clem Cole wrote:
>
>
> On Thu, Feb 13, 2020 at 10:50 AM Timothe Litt  <mailto:l...@ieee.org>> wrote:
>
> Among others, DEC OEM'd Documation card readers.
>
> https://www.youtube.com/watch?v=se0F1bLfFKY
>
> Mark - sorry to go a little direct (simh) topic here [this sort of
> belongs on Warren's COFF mailing list), but since the Card discussion
> started here as I'm kinda curious and will ask it.
>
> Did DEC actually sell that many?   In my years of working around DEC
> gear starting in the late 1960s, I think I saw a card read/punch only
> once on a PDP-6 IIRC, but it might have been a KA10.   I don't think I
> ever saw one on a PDP-8/11 or Vaxen.
>
> I certainly saw and used them on IBM 1401/360 systems, the
> Univac 1100s and CDC's.  I have not so fond memories of the IBM 1442,
> much less a 26 and 29 keypunch (and a couple of great stories too). 
>
> That said, when I think of DEC gear, my memories are of paper tape or
> either the original DEC-Tape units or a couple of cases the old
> cassette tape units DEC had on some of the laboratory PDP 11/05s.

Re: [Simh] Various

2020-02-13 Thread Timothe Litt
Among others, DEC OEM'd Documation card readers.

https://www.youtube.com/watch?v=se0F1bLfFKY

An old friend - the 1442 reader/punch:

https://www.youtube.com/watch?v=w62NC1R6WLs

And with the covers open

https://www.youtube.com/watch?v=gfRxpmiScPA

I didn't find audio of the punch - which was quite noisy (and slow).

Note the dual output trays - program selectable.  Often one used for
accepted data, the other for rejects.

Operators interested in throughput (or lunch) would, contrary to
instructions, try to load and unload while the reader/punch was
running.  This could produce entertaining results.  (Jams - and avian
cards...)

So could the first time an operator encountered the "chad bin full"
error condition...


On 13-Feb-20 10:27, Richard Cornwell wrote:
> Hi Mark,
>
>> Any good simulation would have to include the semi-real I/O
>> instructions RCC (Read and Chew Card) and DPD (Drop and Pie Deck).
>   I considered adding these to my card simulation. I could also add
>   the feature that it will once in a while overwrite the currently
>   reading card with random junk.
>
>   sim_card basically gives you translation from the various formats
>   into a punched image of the card. Or it takes a punched image of a
>   card and translates it to ASCII or other formats. It can also
>   auto-detect most common card deck formats. Currently supported formats
>   are ASCII, CBN, Binary, card, EBCDIC. Also if it can't translate a
>   card to ASCII it will generate a ~raw card with octal values.
>  
>> I'm with you about never again struggling to remove a card from the
>> read gate that had been converted to a mini-accordion or measuring
>> the size of a program in boxes, not bytes.
>>
>> I'm traveling for several weeks, but when back home I will assist Ken
>> in getting an SDS driver for the reader/punch if he hasn't completed
>> the task by then.  All needed documentation is in the 940 Reference
>> Manual.
>Let me know if you have any questions or need things added. sim_card
>is currently used by all of my simulators. You might need to tweak
>the translation tables, if so let me know.
>
>> I wonder if anyone has sound recordings of a reader/punch?  That
>> would be a nice addition to a blinkenlights implementation, which is
>> on my To Do list.
>I am sure we could get some clips.
>
> Rich
>  
>>
>> On Feb 13, 2020, 6:51 AM, at 6:51 AM, Bob Supnik 
>> wrote:
>>> 1. I can confirm that RT11 V5.3 INIT does not work properly with an
>>> RL02 
>>> in 3.10.
>>>
>>> My next step is to trace back changes, because I think it used to
>>> work.
>>>
>>> 2. There's no card reader for the SDS 940 because
>>>
>>> a) I hate card readers (from having used them way back when)
>>> b) I thought there wouldn't be any demand
>>>
>>> Rich Cornwell's library should make it easier to implement a card
>>> reader
>>> these days.
>>>
>>> My first card reader story goes back to an RCA Spectra 70 I used in
>>> 1965.
>>> It had a vacuum pick reader for high speed operation. The reader
>>> would gradually curl the front edge of the cards, so that after two
>>> or three passes, the deck was unreadable. Its failure mode was to
>>> spit cards out,
>>> past the receive hopper, at very high velocity and scatter them ten
>>> or fifteen feet out on the floor...
>>>
>>> The second was a very slow mechanical reader on a PDP-7 in 1966. The
>>> only other keyboard device was a Teletype, so initial entry of
>>> programs was done from punched cards. It read, allegedly, 100 cards
>>> per minute using mechanical fingers with little star wheels on the
>>> end. DEC field service was in almost every week tuning or fixing the
>>> damned thing so that it could actually handle a decent-sized deck.
>>>
>>> In my experience, only IBM built decent card readers. The
>>> reader/punch on the 1620 (I used one in 1964) was very sturdy, and
>>> the 407 (used for offline printing of punched card output) could
>>> read almost anything.
>>>
>>> /Bob
>>>
>>>
>
>

Re: [Simh] Paul Pierce's Collection page

2019-11-10 Thread Timothe Litt
Don't know about the current state (I get timeouts), but it's on the
wayback machine.  Recent captures are 404s, but you can go back to ones
with content.

E.G.
https://web.archive.org/web/20181104021713/http://www.piercefuller.com/library/system.html

https://web.archive.org/web/20181104021628/http://www.piercefuller.com/library/index.html

On 10-Nov-19 08:04, Timothy Stark wrote:
>
> Folks,
>
>  
>
> What happened to Paul Pierce’s collection website?  I googled it and
> tried to access it.  It was gone.
>
>  
>
> http://www.piercefuller.com/collect/  That contains 1401, 700 and 360
> software that I am looking for. They have many diagnostics for 1401,
> 700 and 360 emulators.
>
>  
>
> I am still looking for OLTEP and FRIENDS diagnostics to test IBM
> 360/370/390 emulators.
>
>  
>
> Tim
>
>

Re: [Simh] Fwd: VAX + Spectre

2019-09-17 Thread Timothe Litt
The VAX 9000 does branch prediction & speculative fetches; kills,
aborts, and register logs made for debugging fun.  "A cache cycle wasted
is lost forever" met "waste not, want not".  It fetches aggressively. 
It has multiple microcodes - some loadable, others compiled (and
optimized) into gates.  I was the last person to touch the EBox
microcode.

I haven't studied the CVEs - but my understanding is that the issues
come from cross-process leakage, which is most severe with virtual
machines.  Or can happen if you get a hostile process onto a server. 
Although the VAX architecture specifies a VM mode, it didn't get much
(if any) traction.  The group that specified it did worry about covert
channels, including from cache timing.  It seems unlikely that hostile
processes would appear on VAXes - assuming that you've turned off the
DECnet TASK object.  After all, you don't have bloated web browsers.

NVAX borrowed a lot from the 9000 - but I don't have those documents
handy - I had little to do with it.  (Fun fact: NVAX simulation was run
on 9000 reliability qualification machines - for speed.)

Alpha has branch prediction - type and effectiveness evolved with the
processors.  Architecturally, Alpha uses a much more forgiving memory
ordering model.  Out of order and speculative execution, store buffers,
speculative loads, explicit memory barriers.  It doesn't have microcode,
though PALcode has some of its attributes.  Again, I haven't looked at
the CVEs in detail.  VMS Software claims to have done an analysis -
http://vmssoftware.com/pdfs/news/Customer_Letter_2018_Meltdown_Spectre.pdf
They certainly hired a number of OS experts - but I don't know if that
includes the microarchitectural expertise that would be required to be
definitive.  Their letter seems to indicate black box testing...

For both, remember that by today's standards, these are slow CPUs.  That
cuts both ways.  Also, "verifying microcode" (or the gates & sequencers)
is more than "looking at it".  Understanding is tough.  Even focused
testing is non-trivial.

I would include the environment in any risk assessment - these attacks
(and for that matter, ROWHAMMER and other attacks) require a parallel
process/OS to work.  If you can prevent that, as I understand them, they
won't work.  Since VAX (and Alpha) aren't used in the shared hosting
(VM) world, to a first order, the risk seems very low.  As for hostile
processes on your OS instance - maybe in the days of general timesharing
with an uncontrolled user base compiling code.  But is that the way
they're used today?  Or are they legacy back-end systems with a sane
user base?  And is there a sufficiently motivated and resourced attacker
who could do something with, say, a webserver's private key?  (Once you
have it, you need a DNS and/or routing attack to get your false server
to take traffic; for SSH, one would hope you use a firewall/other
secondary defenses.  And it may be more effective to simply change keys
frequently.  Anything much longer than a key would take a long time to
extract  - and you have other issues if that goes unnoticed.)  And, even
assuming GCC was updated - how many people will (can?) recompile the
legacy application base?  And libraries?

It's rational to look at these - just don't overreact.  The mitigations
have costs, and preventing intrusions may be a better investment.

On another subject, a while back someone mentioned the 34 bit (25-bit
PFN) physical memory ECO to VAX.  The 9000 DID implement it, and my
group created an off-trunk build of VMS that exploited it.  The ECO was
not popular with the OS groups, because it used bits in the PTE (Z & 3
software bits) that needed a new home - a byte that had to go somewhere,
which broke assumptions about data structures.  It turned out that based
on that opposition, the cost of DRAM, and the cost of fixing a
late-breaking bug, we decided to drop support (and supported the full
21-bit PFN - 30 bit PA) instead.  This still required a hardware change
- but it was much smaller.  So 34-bit mode did get implemented, and
almost got to the field - and the DMA devices that read PTEs (for DMA)
had to (and did) know about it...
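The PFN widths line up with the VAX 512-byte page size; a quick sanity check, assuming only the figures stated above:

```python
# Page-size arithmetic behind the PFN widths mentioned above:
# VAX pages are 512 bytes = 2**9, so PA bits = PFN bits + 9.
PAGE_BITS = 9
print(25 + PAGE_BITS)   # 34 - the 25-bit-PFN ECO gives a 34-bit PA
print(21 + PAGE_BITS)   # 30 - the full 21-bit PFN gives a 30-bit PA
```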

On 17-Sep-19 14:01, Clem Cole wrote:
> I can simplify the question a bit.  I have to be careful as I work for
> Intel and I've been involved with a small bit of it on our end and
> some of the lawyers are a bit touchy about the whole situation.   So I
> need to add - these opinions are my own not necessarily my employers.
>
> Basically, if you have a CPU and microcode design that is post the IBM
> AGS that uses any type of branch prediction or speculative execution,
> the processor is now suspect.  But you need to do the analysis
> originally proposed in the German paper. It helps to have the google
> teams work next to you when you do that analysis because you now have
> a recipe for how to apply the ideas, but that is not the only way to
> apply them.  Before then, nobody had thought about the problem.
>
> While 

Re: [Simh] Limits on MSCP controllers

2019-06-24 Thread Timothe Litt

On 24-Jun-19 15:18, Clem Cole wrote:
>
>
> On Mon, Jun 24, 2019 at 2:13 AM Timothe Litt  <mailto:l...@ieee.org>> wrote:
>
>
>>
> MSCP is basically an evolution of the Massbus protocol, on a
> different PHY, and informed by the TOPS-10/20 experiences with
> high-end  (large, redundant, on-line serviceable) configurations. 
>
> That was certainly my impression/memory.
>
As is often the case, things turn out to be complicated.  Here's a more
detailed version.  In an off-list note, Bob pointed out that MSCP
originated in a project he managed that was to develop the "next
generation" disk controller - a forerunner of the UDA.  That project
didn't start with "let's replace Massbus"; its premise was "it's
necessary to move host functions to the controller to support high(er)
density disks.  Especially ECC and error recovery".  Its original PHY
was the memory interface model of the PDP-15/PDP-11 coprocessing pair
(UC15), which became the UQSSP.  Bob reported that he took this project
from concept through the UDA50 prototype - at which point the group
moved to Colorado (and he didn't).

I didn't mean to suggest that MSCP started as an explicit effort to
model/replace/extend Massbus.  However, if you extract the Massbus
protocol from the PHY (OSI L1) and registers, and consider the entire
DSA the similarities are striking.  I wasn't there at the beginning, so
I can't say when/how that came to pass.  But I can say that the earliest
public descriptions of DSA that I attended (mid-70s) described DSA (the
architecture initially encompassing CI, (T)MSCP, and SI (SDI/STI)  in
those terms.  And when I was involved with DSA architecture some years
later, Massbus was used as a reference for many conversations.  By then,
we were talking about 3rd party IO commands and other support for
host-based and cross-HSC shadowing.  But attention, on-line, dual port
issues and many others were mapped between the two for discussion
purposes, and ensuring that MSCP had at least equivalent capabilities to
Massbus was a litmus test.

The primary TOPS-10 architects (who were present through the early
evolution - after Bob's project & through the HSC) directly told me that
that Massbus was their reference model for DSA.  Not surprising, since
Massbus, with all its warts, was the closest thing to a high
availability, high performance bus that was available to them. 

And since TOPS-10 SMP was the company's first serious/successful effort
at a large scale, highly available, high performance, serviceable
system, it certainly made sense for DSA architects to refer to it for
requirements, successes, and pitfalls.  Of course I did when I made my
(small) contributions to the (T)MSCP architecture process.

However the similarities came to pass, I found viewing DSA as an evolved
Massbus to be a useful model, with a lot of support for that perspective
in the specifications.  MSCP contains the high-level protocol of Massbus
drivers (and much more) through the drive control logic/formatter.  SI
replaces the DCL/formatter-to-drive "bus" of Massbus -- SI is serial,
ruggedized and capable of quite long runs.  But it carries much the same
low level drive commands.  (Note that there's a long history of
serializing parallel buses as technology evolves, e.g. PCI -> PCIe ->
CSI, a.k.a. quickPath). The host ports (UQSSP,KLIPA,etc) replace the
registers and DMA channels.  Command and function names from Massbus
spec & drivers often appear in the MSCP spec versions that I used...

DSA was ahead of its time in considering storage as a network
architecture - preceding today's NAS, storage appliances, RDMA & iSCSI. 
It was successful in maintaining host compatibility as technologies were
replaced/elided.  E.g. the devolution from SI to parallel drive
interconnect in integrated controllers; then DSSI and SCSI; transports
from CI to NI, disaster-tolerant clusters over fiber; controller-based
shadowing & RAID, variable density (banded) geometries, and ...  About
the only things that changed in the class drivers were device names. 
And even the port interfaces were remarkably stable.  Of course the
error log interpreters had more significant updates...

Also, dusting off memories, enumeration does have a little more support
in MSCP than I previously indicated; there's a "get next unit" modifier
bit for the "get unit status" command that helps.  But it's still
painfully inefficient for the reasons given.

Some form of block transfer (e.g. give me all known units), such as a
bitmap would have been much better.  64K units as a bitmap would fit in
a 16-block (512 byte block) transfer, which is hardly unreasonable. 
Being sparse, it would not have required much controller memory.  But as
this was settled before I arrived on-scene, it's academic.
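The arithmetic behind the bitmap suggestion is easy to check; here is a minimal sketch (illustrative names only -- no real MSCP controller offered such a command):

```python
# A 64K unit-number space, one bit per unit, fits in a 16-block transfer.
BLOCK_SIZE = 512           # bytes per disk block
MAX_UNITS = 64 * 1024      # 16-bit unit-number space

bitmap_bytes = MAX_UNITS // 8                # 8192 bytes
bitmap_blocks = bitmap_bytes // BLOCK_SIZE   # 16 blocks of 512 bytes

def present_units(bitmap):
    """Yield the unit numbers whose bits are set in a controller bitmap."""
    for byte_index, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (1 << bit):
                yield byte_index * 8 + bit

# Example: a sparse controller with units 0, 42, and 251 present
bm = bytearray(bitmap_bytes)
for unit in (0, 42, 251):
    bm[unit // 8] |= 1 << (unit % 8)

print(bitmap_blocks)              # 16
print(list(present_units(bm)))    # [0, 42, 251]
```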

Johnny's sub-unit confusion probably derives from SCSI LUNs; e.g. the
disks o

Re: [Simh] Limits on MSCP controllers

2019-06-23 Thread Timothe Litt

On 23-Jun-19 20:01, Johnny Billquist wrote:
>
> Timothe Litt clearly knows this better than I do. His mention of the
> attention message jogged my memory a bit. There are just notifications
> going the other way about available disks. And ports are not really
> relevant. In RSX, they are then obviously matched against what you
> configured in your SYSGEN. Disks which do not match any configured
> device are just going to be ignored. No dynamic adding of disks
> possible. And unit numbers can be totally unrelated between one
> controller and the next, and have nothing to do with the device
> numbering inside RSX. So DU5: can map to unit #42 on the third
> controller. RSX will happily do that mapping.
>
MSCP is basically an evolution of the Massbus protocol, on a different
PHY, and informed by the TOPS-10/20 experiences with high-end  (large,
redundant, on-line serviceable) configurations.  The attention messages
are analogous to the attention summary register bits on a Massbus. 
Although intended to be scalable, the design was driven by the initial
implementations, which were the HSC50 (with the SDI/STI PHY).   Later,
we benchmarked configurations with 100s of disks - which required many
racks and long SDI cables. 

TMSCP design lagged MSCP - with the required allowances for tapes -
again using the Massbus model at the PHY (one port connecting to a
formatter that could have multiple drives.  But a lot of the Massbus
ugliness is abstracted away from the OS.

In VMS (and the TOPS-10/20 CI implementations), attention messages can
materialize units on the fly.  The IO working group that defined (T)MSCP
expected all configuration to work this way.  In retrospect, it would
have been better for the OS if the controller could provide a bitmap of
what units are present on a controller.  But SDI/STI (they're the same
phy) were designed for hotplug and for sequenced power-on.  There is no
"drive present" wire - you have to ask a drive to do something in order
to determine if something is plugged in.  So the controller can't know -
unless it were to poll its ports. 

As Mark pointed out, VMS stand-alone backup does try to enumerate all
the devices - which takes forever and a day.  Besides the time it takes
to spin up a disk to determine its geometry (or time out if a port is
empty), there are a limited number of credits (outstanding command
buffers) available in the controllers.  So  sending ~64K "get unit
status" commands to each controller is a painfully slow process - and
frustrating where there are often only 1-4 units on each controller. 
For scale, consider that a 4-XMI system using just KDM70s could have 23
KDMs to poll; a CI system could have 31 HSCs, and an LAVC - well, lots.
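The scale problem is just multiplication; a back-of-envelope check using the controller count from the text (the LUN-space width is an assumption based on the 16-bit unit numbers mentioned elsewhere in the thread):

```python
# Why "get unit status" per possible LUN is painful: one command per
# possible unit number, per controller.  Illustrative numbers only.
LUN_SPACE = 64 * 1024    # possible unit numbers per controller
KDM70S = 23              # 4-XMI system using just KDM70s, per the text

commands = LUN_SPACE * KDM70S
print(commands)          # 1507328 polls - vs. 23 bitmap replies
```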

RSX (and IIRC RT) statically configure MSCP disks as Johnny
describes.  This is only practical for the relatively small
configurations that they support.  I doubt that the unit attention
messages are completely ignored; though not used for configuration, they
also tell the OS when a disk has spun up (or down) based on an operator
pushing the 'online' button.  This is especially true for removable
media (e.g. RA60), but also applies to RA8x/9x.  And the case of an
operator switching unit plugs on a drive.  (Humans do the strangest
things...)

After the UDA/KDB50s,  U/Q (T)MSCP devices moved away from the SDI/STI
interface - but that's transparent to the OS.  For the low end, there
was no need for the hotplug or distances supported by the SDI/STI
interfaces.

And of course, much later the HSCs supported SCSI disk and tape.  And
the HSD/HSZ controllers.  I wasn't involved in those, so I don't
remember their configuration limits.

> And by the way, if the goal of simh was to try and act just like
> existing hardware, then you should not be allowed to have more than
> one tape drive per tmscp controller, since that was the way of all DEC
> tmscp controllers. 

Be careful about saying "all".   The UQ TMSCP-only controllers bundled
with the TxK[5,7] are one drive/controller.  But that is not the
universe of TMSCP controllers.

The HSC50/60/70 support multiple drives per controller (and per K.STI),
e.g. the TA78/79.  The HSC50 predates the T[UQ]K50.

The KDM70 on the XMI backplane, previously mentioned, supports up to 2
active STI ports, each of which can have a formatter (subcontroller)
with multiple drives.

> But tmscp itself also certainly do not have such a limitation, and in
> this case simh do deviate from what the real hardware did. One TQK50
> (or TUK50) per TK50, and so on...
>
SimH has always started with what real hardware does as its design
principle.  However, it doesn't implement all of the real hardware.
People have software that ran on systems that had more hardware than 
the implemented devices support.  So it does bend configuration rules
from time to time.  This is important in cases where SimH implements
low-end mod

Re: [Simh] Limits on MSCP controllers

2019-06-23 Thread Timothe Litt

On 23-Jun-19 13:30, Bob Supnik wrote:
> The four ports is not arbitrary. SimH simulates actual hardware. 

> DEC never built a backplane MSCP controller with more than four ports.
>
I think that's true for the U/Qbuses.  However, the KDM70 (XMI, last I
knew not emulated by SimH) has eight universal ports - any mix of
SDI/STI (disk/tape) that you can plug in.  Officially, max of 2 STI, 8
SDI.  Note that what plugs into STI is a formatter - each of which might
have 4 drives behind it.   Unit numbers are assigned by the drive - IIRC
12 bits for disk, 4 decimal digits for tape (due to bcd encoding of the
unit select switches).  This is slightly simplified - SSDs have
different rules, and some tape formatters support fewer than 4 drives.

Those are KDM70 architectural limits - MSCP might be up to 16 bits, but
I don't have a spec handy.  There might be some flag bits in the field.

(I was somewhat involved in the KDM70 development.)

> If you want to extend the current RQ simulator to include third party
> boards (either SMD-based emulators or SCSI-based emulators), feel free
> to add an appropriate mode switch. I don't know what controller ID
> these third party boards returned, though, nor do I know how VMS
> determined the number of ports per controller.
>
(T)MSCP doesn't deal with ports; it deals with (logical) units.[1]  I
don't remember an efficient mechanism to enumerate the units; IIRC, you
simply wait for an attention message with "Unit Available".   You could
also try a "get unit status" for each possible LUN - but that's a very
large number, especially in a large configuration -- e.g. a CI with lots
of HSC controllers, or even lots of NI workstations with MSCP-served
disks.  STAR had over 200 NI satellites at one point.

As noted, the LUNs are arbitrary - for the larger disks, set by a plug
on the drive.  No requirement for starting at zero, or sequential
numbers.  (Of course, you can't use the same LUN on the same controller
more than once.)

VMS enumerates controllers to assign a letter, and uses the LUN from the
attention message to build the UDB[2] and record the unit number. 
(Allocation classes, used to link dual-ported drives, would be a
prefix.)  So, if the 4th controller has a single RA81 with a unit plug
of 251, the device name would be _DUD251:.    If in allocation class 18,
_$18$DUD251:.  Units can come/go/morph if, for example, the unit plug is
removed or swapped, which is why "mount verification" exists.

I don't remember whether VMS will try to set a unit on-line if it hasn't
received a unit available first.  But I believe that it won't try
anything if it hasn't found the controller.  The controllers (except for
the boot device) are found by running sysgen autoconfig all in one of
the DEC-supplied startup scripts.   This is towards the middle of the
process - it takes quite a while in large configs, so the startup
initiates a bunch of other stuff that can overlap it first.  There are
several callouts in the startup scripts - it's important that disk
mounts be after sysgen is run.  sylogicals(-v5).com would be too soon.

> I think it would be better to understand why VMS is waiting to mount
> additional discs. Alternately, just create bigger discs and have fewer
> of them.
>
One thing to check is that the unit available (attention) messages are
sent at the right times. 

The other is where in the startup the mounts are placed.  The
site-specific startup - systartup(_v5).com, or something called by it
should be OK.  [The _v5 suffix was introduced with VMS V5 because things
like device naming changed; it was removed many releases later.]

[1] This caused some pain with devices that shared a resource across
units - e.g. the RC25 that shared one motor/spindle between a fixed
platter and a removable cartridge... "Clever" hardware design that
ignored the software architecture...

[2] Unit Data Block - VMS's drive context.  They materialize when a unit
is discovered.

> /Bob
>
>
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] More on DMA to IO page

2018-09-06 Thread Timothe Litt

On 06-Sep-18 16:17, Paul Koning wrote:
>
>> On Sep 6, 2018, at 3:59 PM, Bob Supnik  wrote:
>>
>> ...
>> Personally, I like Tim's solution of a private interface between consenting 
>> devices. That would certainly work for the GT40 and its ROM too.
> But if it's a private interface, that means you couldn't do a KMC emulation 
> which runs KMC microcode that talks to some device you aren't already 
> covering as a "consenting" partner.  And my "disk refresh of a display 
> device" wouldn't work, though that's not likely to be code found in the wild.
That's correct.  I did not implement the KMC microarchitecture; it's a
functional emulation.  In fact, I only implemented the DDCMP variant of
COMIOP-DUP - though there are hooks for SDLC.  The KMC does accept (and
verify) microcode loads - if it's not COMIOP-DUP, it fails.  I also let
the micro PC wander so that host errorlogs and keep-alives are less boring.

This is analogous to the way we emulate CPUs - we don't emulate the
microarchitecture, just the instruction results.  And other smart
peripherals (e.g. MSCP controllers, ethernet) that have microcode.

The KMC+Unibus device pairings were expedient, but not particularly
efficient.  They offload the CPU at the expense of a lot of Unibus
traffic.  ("Pair" isn't quite right - KDP supports multiple DUPs/KMC). 

The DMC/DMR also use the KMC, but with a private bus between the KMC &
the line card - and with ROM instead of RAM microcode store.  The
private bus is what enabled megabaud links.

Before single-chip microprocessors (like the 808x) became available &
cheap, the KMC was our universal uP - besides KDP, KDZ, and DMx, it was
used for the AN22, the DX20 (Massbus->IBM storage), and a DMA controller
for printers whose name escapes me at the moment.  And probably more.

Sometimes I think it would be fun to support random microcodes, but then
I remember how closely most of them are tied to hardware minutiae -- for
example the KMC is fast enough relative to the Unibus that its ucode has
delays for signal settling time.   It has cycle/instruction timing
that's faster than the CPU, which in SimH works (mostly) on
instructions.  So a faithful emulation would require two sets of
"instruction" timing.  And for ucode to work, it has to be pretty
faithful.  You don't want to go there.


> Did any KMC based comms software use the KG11 for its CRC?  Or does the KMC 
> do that internally well enough?
>
Not that I recall.  The KDP = KMC + DUP; the DUP has CRC hardware.  KDZ
didn't require a KG11; the DZ doesn't have CRC hardware, so the KMC or
host must have done it - I don't have that ucode handy.  The KDZ
provided DMA output & basic line input (with break chars and flow
control).  I didn't find the input useful for TTYs - I looked into it
for the KS.  DECnet has KDZ support for DDCMP.



___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] DMA access to the IO page

2018-09-06 Thread Timothe Litt

On 06-Sep-18 15:04, Mark Pizzolato wrote:
> On Thursday, September 6, 2018 at 11:33 AM, Timothe Litt wrote:
>>> On 05-Sep-18 22:13, Bob Supnik wrote:
>>> Apparently, the GT40 does this. So... problems. 
>> The KDP (KMC/DUP) and KDZ (KMC/DZ) also do this.   Once the KMC11 
>> was available, this became a fairly popular method to off-load character 
>> processing & turn character interrupts into DMA.  When I did the KDP 
>> emulation, Mark & I negotiated a private API rather than emulate NPR 
>> to the DUP.  See pdp11_kmc.c.  (In the hardware, the KMC ucode runs 
>> the dup (or dz) with interrupts disabled & polls the DUP/DZ CSRs often 
>> enough to catch individual characters.  It then DMAs validated messages 
>> into memory.  The polling would have been pretty expensive to emulate.)
> Along the way, while doing this, you had added the ability to access the 
> I/O page via DMA.
Yes.  It's a hybrid.

The private interface handles DDCMP message transport, avoiding the
character overhead.  You do the framing, CRC & deliver (or accept)
messages.  The KMC does the DMA. 

However, the DUP CSRs are accessed by DMA for resetting/configuring the
DUP, modem control, loopback, and also to detect (via NXM) cases where
the OS tries to activate a non-existent DUP.  These are infrequent, and
didn't merit a private interface.


>> I'm assuming the GT40 either generates 18b addresses via Address
>> Extension bits or does the 16b -> 18b conversion itself, including 
>> the IO page test. The "Unibus" does NOT do the IO page 
>> recognition/sign extension to 18b. In all Unibus systems, that's 
>> done in the CPU. 
> That is the core problem that raised this discussion and illuminated
> the potential problems with the initial simulator DMA access to the 
> I/O page.  
>
> The case we were seeing only had a 16 bit addresses presented for a
> DMA memory reference.  The lack of high bits 16 and 17 was either
> an implementation problem in the VT11 simulation or a bug in the
> program we're running which never programmed the high bus 
> address bits.  I'm leaving it to Lars to explore which of these is the 
> cause.  The code that is running was not captured directly from ROMs
> but was typed in from a listing in a manual.  That precise code may 
> not have ever actually run from a ROM in the I/O page...
Probably the former.  The 11/05 doesn't have an MMU and uses 16-bit
addresses.  The bus master (CPU or device) must set bits 16 & 17 for any
access to the I/O page.  This is required for any Unibus peripheral to
work. 

This isn't a function of where the code resides.  There are two things
going on.  The CPU has to fetch code from I/O space - this isn't DMA,
but it does require the CPU to set the upper PA bits.  The display
processor does DMA to get graphics data - this is DMA.  If the ROM code
tells the display processor to fetch from ROM, that's DMA to I/O space. 
In that case, the display processor is responsible for setting the upper
PA bits.



> - Mark

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] PDP10: KS10 console lights?

2018-09-06 Thread Timothe Litt

On 06-Sep-18 06:59, Lars Brinkhoff wrote:
> Hello,
>
> There is a USB device called the Panda Display.  Its purpose is to allow
> PDP-10 simulators to display internal state using LEDs, much like a
> front panel.  I have patched Richard Cornwell's KA10 simulator to make
> the DATA PI, instruction display on the LEDs.
>
> I have checked the KS10 instruction set.  It doesn't have a DATAO PI,
> instruction.  However that particular opcode isn't in used, so it's just
> a UUO.  This is also the case for ITS microcode.  Before I start working
> on anything, I'd like to ask if it would be OK to repurpose this opcode
> for its traditional use?
>
> https://gitlab.com/DavidGriffith/panda-display
>
Neither the KL nor the KS use DATAO PI,.  I don't have a problem with
implementing it for the KS.  It would need to act like any IO
instruction - MUUO in user mode, functional in Exec and with
User-In-Out.  It's also useless unless there is a place to display the
lights - so unless the display hardware is present, you might as well
let it MUUO.

The LIGHTS UUO (calli -1) was disabled in TOPS-10 when support for the
KI was dropped.  Early KS monitors compiled it as a NOP.  So to be
useful, you need a (trivial) monitor patch.  EDDT might need a tweak to
properly decode it.

Note that the corresponding DATAO PTR, (set switches, KI only) overlaps
TIOE on the KS, and thus could not be implemented.


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] DMA access to the IO page

2018-09-06 Thread Timothe Litt

On 05-Sep-18 22:13, Bob Supnik wrote:
> Apparently, the GT40 does this. So... problems.
The KDP (KMC/DUP) and KDZ (KMC/DZ) also do this.   Once the KMC11 was
available, this became a fairly popular method to off-load character
processing & turn character interrupts into DMA.  When I did the KDP
emulation, Mark & I negotiated a private API rather than emulate NPR to
the DUP.  See pdp11_kmc.c.  (In the hardware, the KMC ucode runs the dup
(or dz) with interrupts disabled & polls the DUP/DZ CSRs often enough to
catch individual characters.  It then DMAs validated messages into
memory.  The polling would have been pretty expensive to emulate.)

I can't speak to whether any Qbus device does DMA to the IO page -
off-hand, I can't think of a DEC device that would have done that. 

It's certainly true that if CPU internal registers were accessible to
DMA writes, bad things could happen.  However, it may not be necessary
to fence them off in the emulator.  In the hardware, I'd expect such
a device to get a NXM (bus timeout).  Or maybe the write is ignored.  So
unless a device exists that expects that behavior, which seems doubtful,
the issue can probably be ignored.  Of course, having said that,
there'll probably be some diagnostic that tests NPR timeouts that way :-( 

(Even more evil would be that some early (unibus) 11's had the GPRs in
I/O space -- I think an implementation artifact, but it was handy to be
able to read them directly off the switches for debugging.)


>
> 1. The original simulator I wrote didn't support DMA to IO space. Code
> for this was added in V4, but the code conformed to the internal
> simulator convention that all addresses are 22b wide. This is
> certainly not the case for Unibus DMA devices; those addresses are 18b
> wide. So the V4 code to test for the IO page needs to be bus-type
> sensitive.
>
> I'm assuming the GT40 either generates 18b addresses via Address
> Extension bits or does the 16b -> 18b conversion itself, including the
> IO page test. The "Unibus" does NOT do the IO page recognition/sign
> extension to 18b. In all Unibus systems, that's done in the CPU.
>
> On Qbus systems, IO page references are distinguished by the assertion
> of BBS7, and only address bits <12:0> matter. Either the CPU or a DMA
> device can assert BBS7, but I don't think the standard Qbus chips ever
> assert BBS7, so I don't think standard Qbus DMA devices can access the
> IO page. I could be wrong on this.
>
> 2. More critically, while all IO space addresses are accessible from
> the CPU, not all are accessible from DMA. In particular, internal CPU
> registers are not, at least on the systems I'm familiar with. (And I
> think Unibus map registers aren't either.) CPUs, in general, didn't
> need to monitor DMA activity, except for systems with internal caches,
> like the 11/70. I know for a fact that the F11 and J11 simply ignored
> DMA activity. The cache, if any, was external to the CPU, and it was
> the responsibility of the cache controller to deal with DMA activity.
>
> At the moment, the PDP11 simulator makes no distinction between IO
> page addresses that are CPU-internal vs bus-external. Without this,
> DMA devices can do truly evil things, like overwrite the PSW or memory
> management registers, that they couldn't do on a real system. So a
> data structure needs to be added to distinguish internal from external
> IO space addresses, code needs to be added to distinguish internal DIB
> entries from external, and call flags added to the IO page read/write
> routines to distinguish CPU access from DMA access.
>
> /Bob
>
>
> ___
> Simh mailing list
> Simh@trailing-edge.com
> http://mailman.trailing-edge.com/mailman/listinfo/simh

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Boot ROMs (was Simulating a GT40)

2018-09-06 Thread Timothe Litt

On 05-Sep-18 06:16, Mark Pizzolato wrote:
> And it would bring us closer to being able to handle PDP-11 host and
>> network boot.
> A ROM could be used to network boot a PDP11 just as it could be used to
> boot from a disk.  The network (XQ) and all disks are already bootable in 
> the PDP11 simulator with the per device boot mechanisms.
You're describing client-initiated boot - e.g. sim>b XQ.

I mean host (network)-initiated boot (and dump), initiated by receipt of
an ENTER MOP message with the correct password.  As in the DMC/DMR
remote boot - see section 3.5.4.2 and the flow chart on p 3-32 of the
DMR11 UG, appendix E of the DMR technical manual, and the M9301/M9312
technical manuals.

The DEUNA/DEQNA use the same mechanism to support remote and power-up
boot, although they also contain additional ROM. 

This is not currently supported.  The non-network (host only) case is
similar.

For -10 comm front ends, a bit in the -10 interface (DL10 or DTE20)
causes the M93xx (or other) ROM to assert AC LO on the Unibus, allowing
the host to gain control of the -11 for load or dump.

Any emulation of these (and there's been recent discussion of it) would
need an equivalent mechanism.

If a ROM device emulation provided an API for this feature, it would
advance that cause. 

There are two parts to making it work:

Adding an API in the CPU of the form assert_powerfail(vector) - where
the default is the usual 24/26, but a ROM can specify an alternate
(usually its base address + 24/26).  This is common to all initiators.

Getting the ROM, host interface, or network device to call it (or expose
a suitable API) to gain control of the CPU.  This varies by device.

For the -11, the existing boot snippets could be migrated to set the
switches & use the "real" ROMs, though as you point out, this is not
necessary.  Originally, it seemed simpler to extract (or recreate) small
fragments from the boot ROMs than to find and emulate all the ROM variants.
But SimH & its community have grown, and with current demands, moving to
a more faithful emulation would be appropriate.  There's no rush - it
can evolve.  But if the GT40 (and somewhere on my list, ANF-10 network,
plus the attempts at KA/KI/KL) emulation provide the mechanism, in the
long run it would be a better emulation.

For the KS10, the hardware works differently - and calling
assert_powerfail() would be an error that traps to the simh> prompt.




___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Simulating a GT40

2018-09-04 Thread Timothe Litt

On 04-Sep-18 15:09, Paul Koning wrote:
>
>> On Sep 4, 2018, at 3:06 PM, Lars Brinkhoff  wrote:
>>
>> Timothe Litt wrote:
>>> If a ROM device is added to the -11, I suggest that:
>>> a) It be capable of multiple units
>>> b) each unit with a start address (in I/O space) & length
>> I was going to do this, but I quickly learned that a SIMH unit can't
>> have a base address of its own.  All units share the address region
>> defined by the governing device.
> Another way to handle ROM would be to have a switch on the LOAD command that 
> tells it to permit binaries that have load addresses in I/O space.  Then your 
> current ROM devices config would be "whatever is loaded by the load commands 
> I have issued".
>
>   paul
>
That's an interesting approach, but it may or may not provide the
correct length.  And it doesn't handle power-on boot/dip switches.

A unit doesn't have to be a UNIT; one can clone a master DEVICE to get
multiple address ranges.  It's been a while, but I'm pretty sure I did
that for some device.



___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Simulating a GT40

2018-09-04 Thread Timothe Litt

On 04-Sep-18 15:06, Lars Brinkhoff wrote:
> Timothe Litt wrote:

>> d) the existing gt40 hack that Mark described be migrated to use the ROM
> The hack never existed.  It was suggested but didn't work.
>
Mark wrote:

> As soon as you've got the ROM image into a form that the LOAD command 
> will load as you need it to, please send it to me and I'll build it into the 
> simulator and let be used via:
>
>  sim> set vt enable
>  sim> boot -40 vt
>
> since:
>  sim> boot vt
> already runs the Lunar Lander demo
>

That's what I think should be replaced with a loadable file -- I guess
not ROM - Mark's suggestion of merging the two confused me.  Building
demo/rom code into simh seems a bad idea for a lot of reasons.


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Simulating a GT40

2018-09-04 Thread Timothe Litt

On 04-Sep-18 14:14, Lars Brinkhoff wrote:
> Phil Budne wrote:
>> I seem to recall VAX boot ROMs handled by doing "load -r kaxxx.bin"
>> How are VAX boot ROMs done?  At _some_ point I thought it was done
>> with "load -r".  Is that not available in the PDP-11 simulation?
> I randomly checked one of the VAX models.  It seems memory is modeled
> with at least two arrays: M[] for RAM, and a separate rom[].  Digging
> further, I see there's also NVRAM and some other memory regions.  The
> PDP-11 doesn't have this.
>
The Unibus/QBus can have multiple ROMs.  Besides the various M9301
variants (on the Unibus), the 11/34 ASCII console can be loaded as can
the M9312, M792 & M873 variants, MRV11, etc.  And the corresponding QBus
console & boot ROMs.  There are dozens (literally) of ROM variants for
different combinations of devices & consoles.  Not just from the -11
world - LCG created quite a few custom ROMs for PDP-11 based ANF-10
nodes & communications front ends.

If a ROM device is added to the -11, I suggest that:

a) It be capable of multiple units
b) each unit with a start address (in I/O space) & length
c) the unit accept "attach " to provide the code/data
d) the existing gt40 hack that Mark described be migrated to use the ROM
e) preferably, provision be made for the other functions of a ROM
module, mentioned below.

Some ROM modules respond to power failure by forcing the trap to 24/26
to use the ROM's address (e.g. power-on boot).  These usually provide a
pin that allows an external switch to force a bootstrap - this is used
by the console ROMs and also by the KL10(DTE)/DL10 to allow the host to
control an -11.  (It's also used by DMC/DMR11s, but in a slightly
different way).  There's a M9312 tech manual on bitsavers...  There's
also a commented listing of that ROM -
http://www.bitsavers.org/pdf/dec/unibus/K-SP-M9312-0-5_Aug78.pdf 
Bitsavers also has an M9301 tech manual.  And some M9301 ROM dumps
turned up at http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/PDP-11_Stuff.html

Attach will accept switches, so you can provide loaders for straight
binary,  ASCII (e.g. S-record or Intel Hex), or whatever else comes to
mind.  I'd start with straight binary (byte 0 of the file maps to ROM
address +0).

With this architecture, adding a boot (or device) ROM becomes as simple
as distributing the ROM image.  And SimH doesn't have to compile-in
every ROM.  And it would bring us closer to being able to handle PDP-11
host and network boot.


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Simulating a GT40

2018-09-04 Thread Timothe Litt
On 04-Sep-18 10:25, Al Kossow wrote:
> On 9/4/18 6:53 AM, Clem Cole wrote:
>> minor nit/detail ...
>>
>> On Mon, Sep 3, 2018 at 2:05 PM Timothe Litt <mailto:l...@ieee.org> wrote:
>>
>>>     Once our CAD group moved off the -10s, the next step was Sun
>>>     workstations for schematic capture (VALID).
>>
>> Valid was not Sun Micro Systems.
>
Yes.  I should have pointed out the ambiguity.
> Eventually Valid switched to Sun.
>
Also true. 
> They started out with their own 68010 multibus hardware
> There was also a tiny 'ScaldStation' Corvus built that
> evolved from the Corvus Concept
>
> Apple switched from Daisy to Valid after the days of Valid's
> proprietary hardware.

SCALD (Valid's software) was also ported to VAXstations (VMS with VWS)
by the late 80s.  It drove development of the VWS emulator for DECwindows.

And before more nits are mentioned: SCALD was developed under
government contract, so it became the basis of other companies, not just
Valid.  But the principals behind it formed Valid - as I often say, IP
is people, not source code.  And while I consider it a schematic capture
tool, in reality it's a toolchain that runs from GED (the graphics front
end) through a compiler and packager before it becomes a wirelist that
can be fed to back-end tools.

DEC was one of Valid's first customers - I think the biggest - and had a
lot of input into the design.  SUDS was another big influence.  The
innovation in SCALD was hierarchical design.

But we're way off topic here, so I'm going to stop.

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Simulating a GT40

2018-09-03 Thread Timothe Litt
On 03-Sep-18 11:52, Mark Pizzolato wrote:
>
> Interesting...
>
> I'm a little confused though.  I certainly understand using the GT40 
> as a standalone system to run Lunar Lander.   That's great fun.
>
> Meanwhile, if the GT40 could be some sort of terminal to some other 
> system, how was it connected to that other system?  Serial line via the 
> DL device?
>
> Also, how/where was the LK40 keyboard connected to the system?
> Through the DL device?  Was there more than one DL device?
>
> - Mark
>

The GT40 stand-alone is a toy.  Lunar lander was just a demo.

In real life, the GT40 was used as a(n expensive) graphics terminal
(~$10-15K IIRC), most often for the -10.  CAD systems, such as SUDS and
(e.g. circuit) simulations used it.  You could add arbitrary -11
peripherals, but that wasn't often done - not cost effective.  The host
provides the disks & application compute - the GT40, which is a vector
display + lightpen & keyboard, offloads the display.  The GT40 pushes
display-list execution to a DMA processor that executes display lists,
similar to the VB10C (VR30, & type 340).  (See the DIS device in
the TOPS-10 Monitor calls manual for details of those devices.)  The
GT40's -11 could provide an additional level of abstraction between the
host & display processor.  Interpolation; step & repeat; managing the
light pen interactions (e.g. drag, draw line, etc).

In addition the GT40 could be located remotely from the host - not quite
office environment, but less of a computer room than the -10.  And while
expensive by today's standards, moving the overhead off the -10 was
worthwhile.  I believe the CAD group had clusters of them talking to the
-10.

It had a long life - introduced about 1972, serviced until 1995.  If you needed the
capability, you could afford it.  But you didn't buy one casually.

For most purposes, the GT40 was superseded by devices like the VT105
(VT100 + b/w graphics), VT125, GiGi, & VT240.  But those are all raster
scan devices - which can't match the quality of a vector display.  And
none of them had a lightpen.

Brochure scans:
https://web.archive.org/web/20060509012428/http://d116.com/dec/gt/GT40-3.jpg
https://web.archive.org/web/20060509012428/http://d116.com/dec/gt/GT40-4.jpg
https://web.archive.org/web/20060509012428/http://d116.com/dec/gt/GT40-5.jpg

More technical info:

http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/DEC-11-HGTGA-B-D%20GT40-GT42%20User's%20Guide.pdf

The keyboard uses a ~110 bps 20ma current loop interface.  It connects
to the standard console interface to the 11/05(11/10).  I don't recall
what was done with the output side - but I suspect it was brought out to
the usual molex connector so that standard -11 diagnostics could be run.



___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

[Simh] Of CD-ROMs

2018-09-03 Thread Timothe Litt
As is often the case, terminology seems to have begotten confusion. 
CD-ROM media have a  (usable) sector size of 2048 bytes + ECC/overhead. 
SCSI allows exposing this as variable logical block sizes.  Device
drivers may present the SCSI LB size to their clients - or they may
reblock the data into larger or smaller units.  CD drives generally use
the SCSI command set - though it may be transported over other buses -
from ATAPI to USB.

For detailed information on CD-ROM operation, see the SCSI2 spec
(X3.131), specifically chapter 14.  A free copy is available at
http://www.staff.uni-mainz.de/tacke/scsi/SCSI2.html ( chapter 14:
http://www.staff.uni-mainz.de/tacke/scsi/SCSI2-14.html  You'll find
references to other chapters, which are necessary for a full
understanding.)  Or SCSI3 (X3.304-1997 in particular, though it
references another 17 documents).

Here are the highlights -- despite some simplifications/omissions I'm
afraid it's still rather long.

The terminology used for CD media differs from that used for magnetic
disks and is at best internally semi-consistent.  CD-ROM format is a
hybrid of audio and data storage - the latter having been grafted on top
of the former.  And a given disk may be pure audio, pure data, or a
mixture.  This note is directed toward pure data disks - mostly.

>
> CD-ROM is a unique SCSI device in the respect that some logical blocks
> on a disc may not be accessible by all commands. SEEK commands may be
> issued to any logical block address within the reported capacity of
> the disc. READ commands can NOT be issued to logical blocks that occur
> in some transition areas, or to logical blocks within an audio track.
> PLAY commands can NOT be issued to logical blocks within a data track.

("Track" isn't what you think of on a traditional hard disk - the set of
sectors on a single surface at a given head position.  It's more like
"partition" - or per its heritage, the "song" or "piece", or "track" of
an LP record.)

(CD-ROMs do not have geometry in the sense of traditional disk drives;
in this respect, they are more like MSCP devices.  OS drivers that
report geometry fabricate sectors/track, tracks/cyl, and cyl/volume -
there is no standard (or perfect) way to do this.  Results vary -
generally the reported medium size is more accurate than multiplying the
"geometry".  The OS driver may also lie about bytes/sector.  Ordinary
user mode programs live happily in these alternate realities, but they
create challenges for emulation.)

> The physical format defined by the CD-ROM media standards provides
> 2352 bytes per sector. For usual computer data applications, 2048
> bytes are used for user data, 12 bytes for a synchronization field, 4
> bytes for a sector address tag field and 288 bytes - the auxiliary
> field - for L-EC (CD-ROM data mode 1). In less critical applications,
> the auxiliary field may also be used for user data (CD-ROM data mode
> 2). A CD-ROM physical sector size is 2048, 2336 or 2340 bytes per
> sector. These values correspond to user data field only, user data
> plus auxiliary data, the 4 byte address tag plus user data plus
> auxiliary data.

(Mode 2 is intended for applications where a few error bits matter less
than capacity - e.g. audio or video streams.  I'm not aware of any OS
that uses it for data, though reading in Mode 2 may provide access to
the ECC bits.  I believe some "copy protection" schemes [used by games,
encyclopedias, etc]  wrote intentionally bad ECC bits so that only
proprietary software could [easily] read the disks; the OS would report
uncorrectable errors if read with its driver.)

> A CD-ROM small frame consists of: a) 1 synchronization pattern (24+3
> bits) b) 1 byte of sub-channel data (14+3 bits) c) 24 bytes of data
> (24 x (14+3) bits) d) 8 bytes of CIRC code (8 x (14+3) bits) Total:
> 588 bits.
>
> For data: the data bytes of 98 small frames comprise the physical unit
> of data referred to as a sector. (98 small frames times 24 bytes per
> small frame equal 2 352 bytes of data per sector.)
>
> A sector that contains CD-ROM data mode one data has the following format:
>
> a) 12 bytes Synchronization field b) 4 bytes CD-ROM data header
> Absolute M field in bcd format Absolute S field in bcd format Absolute
> F field in bcd format CD-ROM data mode field c) 2048 bytes User data
> field d) 4 bytes Error detection code e) 8 bytes Zero f) 276 bytes
> Layered error correction code
>
> A sector that contains CD-ROM Data Mode two data has the following format:
>
> a) 12 bytes Synchronization field b) 4 bytes CD-ROM data header
> Absolute M field in bcd format Absolute S field in bcd format Absolute
> F field in bcd format CD-ROM data mode field c) 2 336 bytes User data
> field (2048 bytes of mode 1 data plus 288 bytes of auxiliary data)
>
(This is what is physically recorded on the medium.  It's not
necessarily what the driver or user sees.)

> Logical addressing of CD-ROM information may use any logical block
> length. When the 

Re: [Simh] SCSI-Interface for simh-vax?

2018-09-01 Thread Timothe Litt
> CD's are normally 1024 byte blocks
2048 Bytes/sector is the ISO std for CD-ROMs (Mode 1).  Mode 2 omits ECC
for 2336 B/sector - but I don't know of a case where someone was crazy
enough to use it for data.

> we might have done at DEC was mess with the block size on a CD

DEC does not modify the physical sector format - it is implemented in
the drive.

VMS packs four 512 B logical sectors into one 2048 B physical sector;
the driver handles buffering and provides the illusion of a 512B sector
size.  Most FILES-11 CDs use unmodified ODS-2.   But distribution CDs
would do things like omit (or truncate) the bit table to save space. 
For that reason, ANA/DISK would fail.  There are some CDs that use a
slightly modified HOM block (FILES-11 B Level 0), but it wasn't widely
adopted.

There are other oddities - drives & drivers tell different lies about
the geometry (cyl/track/sector) of a CDROM; multiplying these out
frequently will not match the file system's idea of the volume size. 
(As recorded in the SCB for FILES-11, equivalent for other formats.) 
The lies vary by OS, version, drive & phase of the moon.  The same CD
read under different conditions will report alternative facts.  These
will not trip up a DEC OS on DEC HW - but can create obscure issues with
simulation - especially if you try to pass geometry  from a physical
drive thru SimH.

Writing a CD is rarely supported by a standard driver - typically, CD
writing software issues direct SCSI commands to the drive (encapsulated
in whatever the real transport is).  This may be by direct IO, or via a
class driver.  It can be somewhat tricky - note that most drives cannot
tolerate buffer underruns when writing.

> I wasn’t able to figure out how to make it work in RSTS/E.

To be bootable, a CD needs an appropriate boot block (LBN 0).  For VMS,
it's written by 'writeboot' - not initialize.  I don't remember the
details for RSTS - look at SAVRES->RESTORE and BACKUP for
possibilities.    Or wait for Paul K to fill that in.

Also note that dual-format CD-ROMs are possible - ISO 9660 reserves the
first 16 sectors for this; thus it's possible to write a disk that is
readable as both FILES-11 and ISO 9660 (with file data being in the same
sectors; only the metadata differs).  Such disks were actually created.

On 01-Sep-18 15:28, Clem Cole wrote:
> below...
>
> On Sat, Sep 1, 2018 at 2:39 PM Zane Healy wrote:
>
> Create a virtual disk in SIMH the size of the CD-R blank.  Prep
> the disk, then burn it to CD-R.  This is how I created my bootable
> CD’s for RT-11 and RSX-11M+.  I’ve then used those CD’s to do
> installs on my PDP-11/73.  I wasn’t able to figure out how to make
> it work in RSTS/E.  I could create the CD-R, but not boot and
> install RSTS/E from it.
>
>
> Just curious ... aren't there funnies because CD's are normally 1024
> byte blocks and disks are usually 512?    IIRC, there are places that
> store numbers of blocks (not bytes), and you have to be careful.    I
> have >>not<< played in any of that in years.   
>
> IIRC one of the things we might have done at DEC was mess with the block
> size on a CD -- that's a Tim Litt type question.   Those bits are so
> long ago deprecated/garbage collected in my active brain cells. ;-)
>

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] VAX emulation issues on Raspberry Pi

2018-07-31 Thread Timothe Litt
On 31-Jul-18 13:46, Paul Koning wrote:
> No, but thanks for that pointer. The paper I was thinking about is
> listed in the references:
> Xi Wang, Haogang Chen, Alvin Cheung, Zhihao Jia, Nickolai Zeldovich, and M. 
> Frans Kaashoek. 2012a. Undefined behavior: What happened to my code? In 
> Proceedings of the 3rd Asia-Pacific Workshop on Systems.
https://people.csail.mit.edu/nickolai/papers/wang-undef-2012-08-21.pdf

Seems to be a rehash of some well-known stuff - that was pretty well
known even in 2012.

In addition to -Wall, I also used to compile SimH with -pedantic and -Werror.  
And usually one of the -std flags.

> Hm, -pedantic is specifically intended to be "the warnings that aren't 
> really useful for general use and real code".
But I often found that if I tripped one, my mind was wandering and
something else was likely not quite right :-)
>> Another big potential gotcha in SimH is the heavy use of casts, and type 
>> aliasing rules (the latter especially problematic in subroutine calls).
> Yes, aliasing is a big problem area.  If you get any warnings about strict 
> aliasing, fixing them (via unions) is a good idea.  Alternatively, 
> -fno-strict-aliasing can be used but that comes with a performance cost and is 
> not the best answer.
>
I'm of the school that says "fix, don't disable".  Especially if you
want your code to work with more than one compiler.  But it can be
painful with legacy code.

I also recommend "C Is Not a Low-Level Language" in the July issue of
CACM.  It makes some useful points that apply here.




Re: [Simh] VAX emulation issues on Raspberry Pi

2018-07-31 Thread Timothe Litt
On 31-Jul-18 12:41, Paul Koning wrote:
>
>
> One thing that happens with newer compilers is that they take more advantage 
> of opportunities offered by the letter of the standard.  If you do something 
> that is "undefined", the compiler can do with that whatever it wants to.  If 
> you have code that appears to allow something undefined under certain 
> conditions, the compiler is free to assume that the undefined thing cannot 
> happen and therefore that scenario doesn't occur.
>
> For example: 
>   extern int foo[100];
>
>   if (i < 0)
>   a += foo[i];
>
> The compiler is allowed to delete that code because subscripting an array 
> with a negative number is undefined.  And modern compilers often will.
>
> There is an excellent paper, I think from MIT, about many undefined things in 
> C that are not all that well understood by many programmers.  Unfortunately 
> I've misplaced the reference.  It mentions that Linux turns off a whole pile 
> of gcc optimizations by specific magic flags to avoid breaking things due to 
> undefined things it does in the existing code base.
>
> For integer arithmetic, what you mentioned is a key point.  Unsigned is 
> defined to have wraparound semantics.  With signed integers, overflow is 
> "undefined".  So, for example, if you want to emulate the PDP11 arithmetic 
> operations, you have to use unsigned short (uint16_t).  Using signed short 
> (int16_t) is incorrect because of the overflow rules.
>
> More generally, in SIMH you probably want the rule that every integer 
> variable is unsigned.  Or at least, make every variable unsigned unless you 
> know there is a good reason why it needs to be signed, and no undefined 
> behavior is possible for that particular variable.
>
> If you compile (in gcc) with -Wall and get no warnings, that's a pretty good 
> sign.  If you do get warnings, they almost certainly need to be fixed rather 
> than disabled.
>

You probably mean
https://people.csail.mit.edu/nickolai/papers/wang-stack-tocs.pdf

For more reading, see, e.g.,
http://port70.net/~nsz/c/c99/n1256.html#J.2,
https://cacm.acm.org/magazines/2016/3/198849-a-differential-approach-to-undefined-behavior-detection/fulltext,
https://blog.regehr.org/archives/1520, and
https://runtimeverification.com/blog/undefined-behavior-review-and-rv-match/


The last is a competitive analysis - but quite informative nonetheless.

In addition to -Wall, I also used to compile SimH with -pedantic and
-Werror.  And usually one of the -std flags.

Another big potential gotcha in SimH is the heavy use of casts, and type
aliasing rules (the latter especially problematic in subroutine calls).




Re: [Simh] VAX emulation issues on Raspberry Pi

2018-07-31 Thread Timothe Litt

On 31-Jul-18 10:08, Paul Koning wrote:
>
>> On Jul 31, 2018, at 9:33 AM, Robert Armstrong  wrote:
>>
>> FWIW, this may also say something about the quality of the code
>> generation in gcc for ARM vs x86 processors, or it may even say
>> something about the relative efficiency of those two architectures. 
> One thing worth doing is to use the latest gcc.  Code generation keeps 
> improving, and it's likely that architectures such as ARM see significant 
> benefits in newer releases.
>
>   

Bob's SimH shows:

Compiler: GCC 6.3.0 20170516

The latest gcc is 8.1.0.

It's a bit of a pain to build (I did it recently), but less so than in
years past.

On a Pi, it will take quite a while.  Be sure to use the latest
dependencies - gmp, mpc, mpfr.  And install it in some place like
/usr/local - you don't want to replace the system compiler.

Read http://gcc.gnu.org/wiki/InstallingGCC &
https://gcc.gnu.org/install/index.html.

I haven't played with isl - it may help, or it may expose new issues.

You may be better off building a cross-compiler & building on a fast x86
or other platform if you have one.  Besides computes, I/O on the Pi can
be pretty slow - USB is not a high-performance bus.  Especially if you
need a bunch of hubs.

You also may need the latest binutils - if not for the build, for using
the compiler.  Somewhere between the version I used previously and the
new one, the object file format was enhanced incompatibly.

ARM is a RISCish architecture, x86 is a very CISC one, burdened with a
lot of backward compatibility.  Under the hood, it uses a lot of clever
optimizations.  Both are moving targets, as are their compilers.  (GCC
isn't the only choice.  Don't forget to try ICC if you want Intel's take
on optimized code for their CPUs.  And Clang is coming along.)  Don't
open the religious war over "relative efficiency"; the only thing that
matters is whether code that you care about has performance that you
deem adequate.

Have fun.


Re: [Simh] pakgen.c - VMS License Key Generator

2018-07-31 Thread Timothe Litt
The code that was posted is inappropriate, and should be removed from
the list archives.

VMS has not been abandoned, and this group is not in the business of
stealing IP.

There is a hobbyist program, still available, for non-commercial
licenses.  Legitimate PAKs can be obtained there.

For commercial licenses, other arrangements are necessary - if you have
licenses for physical hardware, last I knew they could be transferred to
emulators.  Contact HP and/or VMS Software (Inc) for their terms.

Distributing this code (whether or not it works) would tend to undermine
the good will that has supported the hobby and emulation community for
many years.  I'm not an attorney, but it may be illegal under the DMCA
as well.

Speaking for myself (I'm not a list administrator), please don't post
this, or anything like it, to this list ever again.

And as someone who made a living from IP - don't post it anywhere else
either.



Re: [Simh] VAX emulation issues on Raspberry Pi

2018-07-31 Thread Timothe Litt
On 31-Jul-18 01:31, Jeremy Begg wrote:

>
> The license wouldn't be an issue if the SET CPU MODEL command worked.
It looks as though it works - but not as you (or I) expect.  It defines
the SimH model - which is mostly I/O devices.

Your issue is with the SYS_TYPE register, which lives at location
0x40004 in the I/O space of MicroVAX systems.  This is part of the ROM
image - SimH just uses what is loaded.  It's included in the ROM
checksum.

SET CPU MODEL should probably update the register, which has the format:
<31:24>SYS_TYPE <23:16>REV <15:8>SYSDEP <7:0>LicenseID
SYS_TYPE qualifies the SID - the codes are re-used.

I don't have the complete list of SYS_TYPE codes, but a few are:
UVAX:

1: UV2, 4: UV2000/VS2000, 5: VAXterm, 6: 9000 console

CVAX:

1: uVAX (35/36/33/34/38/3900), VAXserver (35/36/33/34/38/3900),
VAXstation (32/3500)
2: VAX 62x0, 63x0
3: VAXstation (3520/3540)
4: VAXstation 3100
7: FT5200

Rev is 1-based.

<0> is set for a timesharing system.
<1> is set for a single user system.

It would have to recompute the ROM checksum.

You should probably examine the register on your physical system - I
would expect that NVAX either added codes to the CVAX set, or started
over with a new enum.  I don't remember which.

> I recall some of that discussion.  I think the SSH server is having to do a
> whole lot of math to come up with sufficient entropy and no doubt that's a
> lot of floating point.
It would be odd if it was FP - crypto is generally multi-precision integer.

Entropy doesn't come from math.  It comes from I/O, timers, or these
days, from a hardware noise source.
There is some math involved in whitening - but it's mostly shifts &
xors.  Not FP.
SimH is more predictable than real hardware.

You can probably settle this by enabling CPU history, and randomly
pausing emulation and dumping the buffer while you are waiting.

> It occurred to me that the emulation I'm running is a -3900 series machine
> which if memory serves, did not have an FPU.  Meaning all those VAX floating
> point operations are being emulated twice -- once by the VAX and once by
> SIMH.  Is that correct?  If so I wonder if the emaulation could be tweaked
> to behave as if the emulated machine has an FPU.
The only VAX without an FPU was the early 780, which was ECOd.
There are some emulation options:

H-float is optional, and emulated when absent.

Packed decimal (MOVP, CMPP3, CMPP4, ADDP4, ADDP6, SUBP4, SUBP6,
CVTLP/PL, CVTPT/TP/PS/SP, ASHP, MULP, DIVP) is optional, and emulated
(mostly by MicroVAXes).

Later processors (after 86) could (and did) omit two groups of instructions:
  MATCHC, MOVTC, MOVTUC, CRC and EDITPC
  ACB[FDGH], EMOD[FDGH] and POLY[FDGH]

And, of course the entire vector instruction set (I was responsible for
that emulator).

Although in theory these were included or emulated as a group, some
implementations were selective.  VMS does not rely on model-specific
knowledge: it checks each opcode and loads the emulator if any
instruction is emulated.  Those that are implemented never reach it.

Last I knew, SimH implements all instructions on all models (except
vectors :-().

SimH's design goal is correctness, not particularly speed.  Except for
the -10, all run on a 32-bit platform.

FWIW: there's room for optimization by promoting some internal
representations to 64-bits where available.  But this is a big job, and
would require a lot of validation.  Feel free to volunteer...

I'm not sure where SimH stands on compiler optimizations - you can
always try compiling with the highest optimization level your compiler
supports.  But you may have to report some bugs...



Re: [Simh] VAX emulation issues on Raspberry Pi

2018-07-30 Thread Timothe Litt
Two issues have been discussed before.

The boot failures are being worked by Mark - they're some timing issue
having to do with the fact that SimH is faster than the hardware.  They
seem to be a heisenbug.  He's recently added instrumentation.

The SSH startup isn't compute bound so much as entropy limited.  This
was discussed a while ago, but I don't recall the final outcome.  Check
the archives.

The 4000/96 would be a NVAX or NVAX+.  It would require more license
units than the 3900, so I'm somewhat surprised that you're having
issues.  But LMF has lots of ways to evaluate licenses.  The SID
register reflects the CPU type; SYS_TYPE has the licensing bits.  The
SID determines which VMS CPULOA is loaded - this determines what I/O
buses are supported, the machine check format - you can't change it
without a lot more grief.  The SYS_TYPE bits distinguish workstation
vs. server.  There's SimH support for that.

On 30-Jul-18 18:57, Jeremy Begg wrote:
> Hi,
>
> A while ago the power supply in my VAXstation 4000/96 died and rather than
> fix it I decided to move it to a Raspberry Pi 3.
>
> The VAXstation has a 100MHz CPU and the RPi has a 1.2GHz CPU - about 120
> times faster.  Yet the performance of SIMH basically sucks, especially when
> logging in to the emulated VAX via SSH.
>
> On the real VAXstation, establishing an SSH session was slow -- it would
> take the better part of a minute -- but once established it was very usable
> and quite capable of running a DECterm to an X11 display on a remote PC over
> an SSH tunnel.
>
> On the Raspberry Pi the SSH session establishment takes several minutes and
> trying to run a DECterm is painful to say the least.  I was hoping that the
> RPi's much faster CPU would compensate for the emulation overhead,
> particularly on a very CPU-intensive task like SSH session establishment, so
> this result is rather disappointing.
>
> I could perhaps put up with those issues but there two other, more
> fundamental problems when starting the simulation.
>
> The first one is, the emulation can't be started automatically; I have
> to run it interactively in a terminal window.  If I try to automate the
> startup using, for example
>
>$ ./vax < vax.ini
>
> the VAX console boot ROM fails a self test and refuses to boot into VMS.
> If I type the commands from vax.ini by hand, it works fine.
>
> A similar issue occurs if I try to load the boot console NVR from a file:
> the VAX console boot ROM fails its self-test and won't boot VMS.
>
> The second problem is that the simulated VAX is *always* a VAXserver 3900. 
> Trying to SET CPU MODEL=MicroVAX just doesn't work, so my VAX-VMS licence
> PAK's availability table code don't suit the machine any more.
>
> The SIMH version is currently 
>
>MicroVAX 3900 simulator V4.0-0 Beta   git commit id: 733ac0d9
>
> I tried downloading the latest from Github (git commit id: 8077d4de) but it
> didn't fix the startup issues so I haven't persisted with it.
>
> Before starting this exercise I had read several reports of people
> successfully using Raspberry Pi to run an emulated VAX so I have to think
> something is very broken in my RPi environment, but I'm not sure what I
> should be looking for.
>
> FWIW the Raspberry Pi is running
>
> Linux pieric 4.14.52-v7+ #1123 SMP Wed Jun 27 17:35:49 BST 2018 armv7l 
> GNU/Linux
>
> and the file /etc/os-release is:
>
> PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
> NAME="Raspbian GNU/Linux"
> VERSION_ID="9"
> VERSION="9 (stretch)"
> ID=raspbian
> ID_LIKE=debian
> HOME_URL="http://www.raspbian.org/"
> SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
> BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
>
> SIMH was built with "gcc (Raspbian 6.3.0-18+rpi1+deb9u1) 6.3.0 20170516".
> Here is the full SHOW VERSION output:
>
> sim> show version
> MicroVAX 3900 simulator V4.0-0 Beta
> Simulator Framework Capabilities:
> 64b data
> 64b addresses
> Threaded Ethernet Packet transports:PCAP:TAP:NAT:UDP
> Idle/Throttling support is available
> Virtual Hard Disk (VHD) support
> RAW disk and CD/DVD ROM support
> Asynchronous I/O support (Lock free asynchronous event queue)
> Asynchronous Clock support
> FrontPanel API Version 5
> Host Platform:
> Compiler: GCC 6.3.0 20170516
> Simulator Compiled as C arch: ARM (Release Build) on Nov  9 2017 at 
> 08:04:00
> Memory Access: Little Endian
> Memory Pointer Size: 32 bits
> Large File (>2GB) support
> SDL Video support: No Video Support
> RegEx support for EXPECT commands
> OS clock resolution: 1ms
> Time taken by msleep(1): 1ms
> OS: Linux pieric 4.14.52-v7+ #1123 SMP Wed Jun 27 17:35:49 BST 2018 
> armv7l GNU/Linux
> git commit id: 733ac0d9
>
> The later version (which I'm not running because it didn't fix the startup 
> issues) is:
>
> sim> show version
> MicroVAX 3900 simulator V4.0-0 Current
> Simulator 

Re: [Simh] CMP R3,(R3)+

2018-07-30 Thread Timothe Litt
On 30-Jul-18 11:45, Johnny Billquist wrote:
> On 2018-07-30 15:51, Timothe Litt wrote:
>> On 30-Jul-18 09:30, Paul Koning wrote:
>>> Yes, that is the standard way to do this.  I have never seen the
>>> code you quoted before and I can't imagine any reason for doing that.
>> A memory address test's verification pass.  Check that  memory
>> contains address of self. Of course, you need a
>>
>>  bne fail
>> following the compare :-)
>
> Not to mention that it will succeed or fail depending on which PDP-11
> model you run the code on? :-)
Oddly enough, we did have quality control, and it usually worked.

Diagnostics are often CPU-specific.  This was fixed after the 11/20.
I might have done this - it represents a 40% savings in instructions for
the loop:
    ; 11/20 safe:
    10$: mov  r3, r0
         cmp  r0, (r3)+
         bne  fail
         dec  r1
         bne  10$

    ; Many other 11s:
    10$: cmp  r3, (r3)+
         bne  fail
         dec  r1
         bne  10$

    ; Some other 11s (with SOB):
    10$: cmp  r3, (r3)+
         bne  fail
         sob  r1, 10$

A 20-40% reduction in instructions in an inner loop at boot time is
worth a runtime check for the 11/20 - if I cared (e.g. a BIST is
probably in processor-specific ROM, so no need to check).



Re: [Simh] CMP R3,(R3)+

2018-07-30 Thread Timothe Litt
On 30-Jul-18 09:30, Paul Koning wrote:
> Yes, that is the standard way to do this.  I have never seen the code you 
> quoted before and I can't imagine any reason for doing that.
A memory address test's verification pass.  Check that memory contains
address of self. Of course, you need a

    bne fail
following the compare :-)

> Either option of course only works if R3 contains a valid memory address, and 
> it must be even. 
I should have noted that "valid memory address" includes "even" for
words.  But if the code provided works on any 11 (obviously, not the
11/20), that constraint is met.
>  A short way to increment by 2 that doesn't depend on R3 being even would be 
> CMPB (R3)+,(R3)+.
>
> It's fairly common to see the TST, not just because it's shorter, but also 
> because it has a well known effect on the C condition code (it clears it).  
> For example, a common pattern when C is used to indicate success/fail in a 
> subroutine:
>
>   TST  (PC)+ ; Indicate success
> fail:   SEC
>   MOV  (SP)+,R1  ; ...
> RTS  PC
>
> You might also see code that pops a no longer needed value from the stack, 
> either clearing or setting C or leaving it alone.  To clear, you'd see TST 
> (SP)+.  To set, COM (SP)+.  To leave it untouched, INC (SP)+.  (More obscure 
> is NEG, which sets C if the operand is non-zero and clears it if it is zero.)
>
The C bit was a very common way of returning success/failure from
subroutines and system services.  In his case, however, the condition
codes were ignored in all paths from the instruction.  It was just a
very odd way of adding 2.

Those constructs bring back memories... particularly of debugging such
clever code that didn't have the corresponding comment.  I often worked
on several machines with slightly different ideas of condition codes;
switching took some effort.  Clever coding is fine - as long as you
document it.

BLISS got pretty good at being clever - but never at commenting its
assembler code.  Some of its contortions caused CPU architects to pause
before agreeing that the code should work.  On a few occasions, SHOULD
and DID diverged...



Re: [Simh] CMP R3,(R3)+

2018-07-29 Thread Timothe Litt

On 29-Jul-18 06:42, Lars Brinkhoff wrote:
> Hello,
>
> I have a very small debugger for the GT40 called URUG, or micro RUG.  It
> has two troublesome instructions: CMP R3,(R3)+ and equivalent with R4.
> I suppose SIMH will run it fine even though there's a hazard?
>
> The PALX assembler complains about this, so I'm considering changing the
> code.  As far as I can see, the instructions are used to add 2 to a
> register.  It's shorter than an ADD R3,#2, which is important because
> there's not a lot of memory on this machine.
>
> Would there be any possible downside to using TST (R3)+ instead?
>
> The whole file is here:
> https://github.com/PDP-10/its-vault/blob/master/files/sysen2/urug.27
>
I think the 11/20 had a bug and compared C(r3)+2 to @R3; the original
intent was that it would compare C(r3) to @R3; then increment R3.  I
don't recall if it was fixed in later machines.

In any case, the CMP's purpose is to set the condition codes - e.g. it
does a subtract to set the condition codes, comparing the register
contents with memory.  Aside from the autoincrement, it has no other
side effects.

Your code doesn't use the condition codes, so there's no difference
between your cmp and a TST (R3)+.

In either case, the instruction is making a memory reference.  So R3
must not point at NXM (or many I/O devices, which have read side-effects).

As long as R3 points to a valid location in core memory, TST should be fine.


Re: [Simh] TMXR/UC15 documentation?

2018-07-17 Thread Timothe Litt
On 17-Jul-18 16:29, Lars Brinkhoff wrote:

>
> It's mostly used to allow direct access to PDP-11 main memory.
>
> Maybe a longer explanation is in order.
>
> The MIT AI PDP-10 had a special device attached called the Rubin 10-11
> interface.  It allowed connecting up to eight PDP-11s.  The interface
> mapped the PDP-11 memories into the PDP-10 address space.  As far as I
> know (there's not much in the way of documentation), there was only
> shared memory, no interrupts or any other features.  The PDP-10 always
> initiates accesses.  The 11s could not access PDP-10 memory.
>
The DEC version of this is the DL10 - at least for the KA/KI.  It was
also supported on KLs with external memory/IO buses.  This provides a
Unibus window into PDP-10 memory.  IIRC, up to 4 Unibuses per DL10.
Either side could write -10 memory.  The DL10 connects to the IO bus
(for configuration, interrupt status/enables, etc., and the "electronic
finger", which triggers the boot ROM - which is how the 10 gets control
for load/dump), and to the memory bus for the memory references - which,
to the Unibus, look like memory.  We used this for ANF-10 network front
ends; some environments also connected -15s to the Unibus.

You should think about what you describe as a subset of the DL10, which
someone will eventually want to add to the KA/KI emulations.

For the KL10, the DTE20 uses a somewhat different architecture - I don't
have time to describe it now.

I think the manuals for both are on Bitsavers - but I have a meeting to
get to...

It's pretty clear that SimH needs a model/portable library for shared
memory...


Re: [Simh] EXT :Re: Simh Digest, Vol 172, Issue 4

2018-05-08 Thread Timothe Litt
In addition to Dave's comments:

For a MSwin PC, I've switched to VcXsrv (vcxsrv.sourceforge.net) from
Xming.  It's free, and works better - especially the mouse &
interactions with local windows.

If your host is VMS, you may need to SET DISPLAY; on a Unix, set the
DISPLAY environment variable.  This is often magically done by your SSH
client & server.  You may need to enable X11 forwarding.  (e.g. in
PuTTY, under X11 options; on the host - it depends.  For Unix, see
sshd_config.)

To simplify life, I recommend getting X11 working with your client (X11
server, ssh - or other remote terminal) from a modern OS first.  (I use
Linux.)  XClock is my favorite test.  Once that's sorted out, work on
the Motif machine. 

If the host is Unixy, one frequent gotcha is the MIT Magic Cookie
failing if you use su/sudo.

As Dave said, if you want specific help, provide full environment
information.  OS on each end; X server, how client and server are
connected (protocol, software).  Including versions.

Also, note that x.org has deprecated a number of the X11 extensions that
VMS & Tru64 liked.  So the latest version of the X11 server may not be
the best - versions matter.

Have fun.

On 08-May-18 14:38, Hittner, David T [US] (MS) wrote:
>
> You have asked for a rather generic X how-to without providing much of
> the useful information needed to help instruct you.
>
>  
>
> The most important information is what X (windows/motif) Client you
> will be running the applications on – this is usually the guest OS
> (OpenVMS, Tru64, BSD, etc.) running in the SIMH simulation, and what X
> Server (usually run on the PC that you want to connect to the guest
> OS) that you will be running.
>
>  
>
> From your email we might presume OpenVMS on SIMH VAX, and the native X
> server (Gnome?) on Ubuntu. If you provide more server and client
> information, someone can provide more detailed instructions.
>
>  
>
> Generally:
>
> 1)  Install the X Client software (applications) on the virtual
> system (guest OS). If this is DecWindows/Motif on OpenVMS, read the
> fine installation manual - making sure to follow the AUTOGEN
> instructions to start up the windowing system on boot and adding
> enough reserved memory to prevent detached application crashes. :-)
>
> 2)  Install the X server on the graphical visualization system
> (usually a PC). This is already done if running Linux on a PC in
> graphical mode. On Windows, install your favorite X Server software
> (Pathworks, Hummingbird, Attachmate, PuTTY, MING, etc.)
>
> 3)  On the X Server side, make sure the permissions are set to
> allow the X Client’s  IP (guest OS IP) to display to the X Server.
>
> 4)  Connect from the X Server system to the X Client system via
> Telnet, SSH, or a native X Server connection interface, depending on
> the X Server system’s capabilities.
>
> 5)  On the X Client side, start up the X application (XClock,
> Xterm, etc.) and export the X application session to the X Server
> visualizer (PC). The export and start up method methods vary depending
> on the X Client and guest OS. I would recommend starting with XClock
> or XEyes as your simplest connection tests.
>
>  
>
> Dave
>
>  
>
> *From:*Simh [mailto:simh-boun...@trailing-edge.com] *On Behalf Of
> *Phil King
> *Sent:* Tuesday, May 8, 2018 12:26 PM
> *To:* simh@trailing-edge.com
> *Subject:* EXT :Re: [Simh] Simh Digest, Vol 172, Issue 4
>
>  
>
> Hello 
>
>  
>
> May i ask for a how to manual for running windows motiff i use all the
> instructions for every body i find and i still get can not display or
> somthing like that i dont have it in front of me.. 
>
> i am using ubuntu 18.04 on both my notebook and my dell server.. The
> dell server is going to be my cluster server as soon as i can get the
> first instance working .. so please let me know. 
>
>  
>
> Phil
>
> thanks in advance
>
>  
>
> Philip King 4681 Carr Rd Hillsboro Ohio 45121 9374421909
> pr...@yahoo.com 
>
>  
>
>  
>
> On Tuesday, May 8, 2018, 12:00:58 PM EDT,
> simh-requ...@trailing-edge.com 
>  > wrote:
>
>  
>
>  
>
> Send Simh mailing list submissions to
>
>     simh@trailing-edge.com 
>
>  
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
>     http://mailman.trailing-edge.com/mailman/listinfo/simh
>
> or, via email, send a message with subject or body 'help' to
>
>     simh-requ...@trailing-edge.com 
>
>  
>
> You can reach the person managing the list at
>
>     simh-ow...@trailing-edge.com 
>
>  
>
> When replying, please edit your Subject line so it is more specific
>
> than "Re: Contents of Simh digest..."
>
>  
>
>  
>
> Today's Topics:
>
>  
>
>   1. Re:  Problems running simH in Android (Mark Pizzolato)
>
>  
>
>  
>
> 

Re: [Simh] Is there a searchable SIMH Archive of postings ?

2018-04-19 Thread Timothe Litt

On 19-Apr-18 17:18, James W. Laferriere wrote:
> Hello All,  Having done a modicum of Google searching for a
> searchable archive of postings, I ask the question here.
As noted at the foot of every message on the list:
http://mailman.trailing-edge.com/mailman/listinfo/simh

tells you about

http://mailman.trailing-edge.com/pipermail/simh/

Google searching the archive with:

    search site:mailman.trailing-edge.com/pipermail/simh/

and take the first result, which is:
http://mailman.trailing-edge.com/pipermail/simh/2016-September/015883.html

As to whether it made it to the FAQ...






Re: [Simh] Printer on TOPS-10

2018-03-22 Thread Timothe Litt
Setting up the ANF-10 node number is under networking.  It changes
device naming (e.g. if your host's node number is 16, LPT0 becomes
LPT160).  Virtually all customers had some sort of network, so bugs
(such as this one) crept in.  We tried to keep non-network
configurations working, but bugs did come up.  (Probably because no
in-house system was without a network...)

IIRC, in HWCFG under the LP20, there should be a question like "LPT0
lowercase [YES,NO,PROMPT]:"

On 22-Mar-18 08:22, Quentin North wrote:
> Through some testing, i have confirmed that the resolution of the
> printing for me was adding the /device:lpt0 switch. Im not sure where
> to set up the ANF-10 node number as I don’t have networking enabled on
> my Tops-10 system. I couldn’t see where in MONGEN you enter much about
> the printers. The only question I can see is Include UNIBUS/LP20
> printers (NO,YES,PROMPT): y
>
>
>> On 21 Mar 2018, at 14:33, Timothe Litt <l...@ieee.org
>> <mailto:l...@ieee.org>> wrote:
>>
>> You should not need the /Device.  There may be an issue if you
>> haven't assigned a non-zero ANF-10 node number to the machine - I
>> vaguely remember a bug with that.  You can change the START PRINTER
>> command in SYSTEM.CMD if necessary.  But it's better to assign a node
>> number.
>>
>> Note that the LP64 RAM is upper-case only.  (It will fold lowercase
>> to uppercase.)  If you expect (upper and) lower case output, use
>> LP96.RAM.  LPTSPL will make the right choice if you MONGEN the
>> printer correctly, as will any application that asks for the printer
>> type (or uses the LL pseudo-device).
>>
>>
>> On 21-Mar-18 10:24, Quentin North wrote:
>>> Having fixed the LPFORMS, SYSTEM.CMD and SYSJOB.INI, and enable lp20
>>> in simh, I still couldn’t get it to work until I did the shutdown
>>> and startup as below. Now it prints. Hurrah!
>>>
>>> 14:18:09        Printer 0  -- Not available right now --
>>>
>>> OPR>shutdown printer 0
>>> OPR>  
>>> 14:18:14        Printer 0  -- Shutdown --
>>> OPR>start printer 0/device:lpt0
>>> OPR>  
>>> 14:18:27        Printer 0  -- Startup Scheduled --
>>> OPR>  
>>> 14:18:27        Printer 0  -- Started --
>>>
>>> 14:18:28        Printer 0  -- VFU error --
>>>                 Reloading RAM and VFU
>>>
>>> 14:18:28        Printer 0  -- Loading RAM with 'LP64' --
>>>
>>> 14:18:28        Printer 0  -- Loading VFU with 'NORMAL' --
>>>
>>> 14:18:28  <1>   Printer 0  -- Align Forms and Put Online --
>>>                 Type 'RESPOND  PROCEED' when ready
>>>
>>> OPR>respond 1 proceed
>>>
>>>> On 21 Mar 2018, at 12:43, Timothe Litt <l...@ieee.org
>>>> <mailto:l...@ieee.org>> wrote:
>>>>
>>>>
>>>> On 21-Mar-18 08:24, Jordi Guillaumes Pons wrote:
>>>>>
>>>>> Jordi Guillaumes i Pons
>>>>> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
>>>>> HECnet: BITXOW::JGUILLAUMES
>>>>>
>>>>>
>>>>>
>>>>>> On 21 Mar 2018, at 13:19, Timothe Litt <l...@ieee.org
>>>>>> <mailto:l...@ieee.org>> wrote:
>>>>>>
>>>>>>
>>>>>> On 21-Mar-18 07:02, Jordi Guillaumes Pons wrote:
>>>>>>> Some years ago I wrote a note to myself:
>>>>>>>
>>>>>>> - Enable printing:
>>>>>>>
>>>>>>> 1) Create file SYS:LPFORMS:INI with the following content:
>>>>>>>
>>>>>>> NORMAL:ALL/BANNER:01/HEADER:01/LINES:66/WIDTH:132/TRAILER:01
>>>>>>>
>>>>>>> 2) In OPR: SHUTDOWN PRINTER 0
>>>>>>> 3) In OPR: START PRINTER 0/DEVICE:LPT0
>>>>>>>
>>>>>>>
>>>>>>> I don’t remember what problem I was trying to solve, but right
>>>>>>> now this file exists and printing works. Hope it can help you.
>>>>>>>
>>>>>>>
>>>>>>> Jordi Guillaumes i Pons
>>>>>>> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
>>>>>>> HECnet: BITXOW::JGUILLAUMES
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> LPFORM.INI tells LPTSPL how to process forms (the paper stock on
>>>>>> which a job is pr

Re: [Simh] Printer on TOPS-10

2018-03-21 Thread Timothe Litt
You should not need the /Device.  There may be an issue if you haven't
assigned a non-zero ANF-10 node number to the machine - I vaguely
remember a bug with that.  You can change the START PRINTER command in
SYSTEM.CMD if necessary.  But it's better to assign a node number.

Note that the LP64 RAM is upper-case only.  (It will fold lowercase to
uppercase.)  If you expect (upper and) lower case output, use LP96.RAM. 
LPTSPL will make the right choice if you MONGEN the printer correctly,
as will any application that asks for the printer type (or uses the LL
pseudo-device).


On 21-Mar-18 10:24, Quentin North wrote:
> Having fixed the LPFORMS, SYSTEM.CMD and SYSJOB.INI, and enable lp20
> in simh, I still couldn’t get it to work until I did the shutdown and
> startup as below. Now it prints. Hurrah!
>
> 14:18:09        Printer 0  -- Not available right now --
>
> OPR>shutdown printer 0
> OPR>  
> 14:18:14        Printer 0  -- Shutdown --
> OPR>start printer 0/device:lpt0
> OPR>  
> 14:18:27        Printer 0  -- Startup Scheduled --
> OPR>  
> 14:18:27        Printer 0  -- Started --
>
> 14:18:28        Printer 0  -- VFU error --
>                 Reloading RAM and VFU
>
> 14:18:28        Printer 0  -- Loading RAM with 'LP64' --
>
> 14:18:28        Printer 0  -- Loading VFU with 'NORMAL' --
>
> 14:18:28  <1>   Printer 0  -- Align Forms and Put Online --
>                 Type 'RESPOND  PROCEED' when ready
>
> OPR>respond 1 proceed
>
>> On 21 Mar 2018, at 12:43, Timothe Litt <l...@ieee.org
>> <mailto:l...@ieee.org>> wrote:
>>
>>
>> On 21-Mar-18 08:24, Jordi Guillaumes Pons wrote:
>>>
>>> Jordi Guillaumes i Pons
>>> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
>>> HECnet: BITXOW::JGUILLAUMES
>>>
>>>
>>>
>>>> On 21 Mar 2018, at 13:19, Timothe Litt <l...@ieee.org
>>>> <mailto:l...@ieee.org>> wrote:
>>>>
>>>>
>>>> On 21-Mar-18 07:02, Jordi Guillaumes Pons wrote:
>>>>> Some years ago I wrote a note to myself:
>>>>>
>>>>> - Enable printing:
>>>>>
>>>>> 1) Create file SYS:LPFORMS:INI with the following content:
>>>>>
>>>>> NORMAL:ALL/BANNER:01/HEADER:01/LINES:66/WIDTH:132/TRAILER:01
>>>>>
>>>>> 2) In OPR: SHUTDOWN PRINTER 0
>>>>> 3) In OPR: START PRINTER 0/DEVICE:LPT0
>>>>>
>>>>>
>>>>> I don’t remember what problem I was trying to solve, but right now
>>>>> this file exists and printing works. Hope it can help you.
>>>>>
>>>>>
>>>>> Jordi Guillaumes i Pons
>>>>> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
>>>>> HECnet: BITXOW::JGUILLAUMES
>>>>>
>>>>>
>>>>>
>>>> LPFORM.INI tells LPTSPL how to process forms (the paper stock on
>>>> which a job is printed).
>>>> The default form is "Normal".  Form names with the same 4 initial
>>>> characters use the same stock; no operator intervention is required
>>>> to change among them. (This is used to allow specifying soft
>>>> parameters, such as the number of banner pages, per job.)  If a job
>>>> requires different stock, the operator is notified.
>>>
>>> IIRC the problem was the print spooler didn’t got started on boot
>>> and a command to tell OPR it had the default form mounted was
>>> required to start printing. Defining LPFORMS.INI avoided that
>>> problem and the print spooler started automatically. Does it make
>>> sense to you?
>>>
>>> Blurred memories also tell me there was some alignement test
>>> involved. After telling OPR the printer had the form mounted it
>>> asked to confirm the form was correctly aligned.
>>>
>>> Doh, memory…
>>>
>>>
>>>
>> Not exactly.  The default form is NORMAL.  The printer is started by
>> OPR; as long as INITIA runs, it will start OPR, which will take
>> SYSTEM.CMD.  SYSTEM.CMD is what configures the galactic components.
>>
>> LPTSPL is started by QUASAR whenever it's needed - that is, a stream
>> is started and there's a job in the queue (or has been recently). 
>> QUASAR maintains the printer state, so LPTSPL doesn't have to stick
>> around when idle.  It's possible that LPTSPL prompts for a form if
>> LPFORM.INI doesn't exist - I believe there's a default LPFOR

Re: [Simh] Printer on TOPS-10

2018-03-21 Thread Timothe Litt

On 21-Mar-18 08:24, Jordi Guillaumes Pons wrote:
>
> Jordi Guillaumes i Pons
> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
> HECnet: BITXOW::JGUILLAUMES
>
>
>
>> On 21 Mar 2018, at 13:19, Timothe Litt <l...@ieee.org
>> <mailto:l...@ieee.org>> wrote:
>>
>>
>> On 21-Mar-18 07:02, Jordi Guillaumes Pons wrote:
>>> Some years ago I wrote a note to myself:
>>>
>>> - Enable printing:
>>>
>>> 1) Create file SYS:LPFORMS:INI with the following content:
>>>
>>> NORMAL:ALL/BANNER:01/HEADER:01/LINES:66/WIDTH:132/TRAILER:01
>>>
>>> 2) In OPR: SHUTDOWN PRINTER 0
>>> 3) In OPR: START PRINTER 0/DEVICE:LPT0
>>>
>>>
>>> I don’t remember what problem I was trying to solve, but right now
>>> this file exists and printing works. Hope it can help you.
>>>
>>>
>>> Jordi Guillaumes i Pons
>>> j...@jordi.guillaumes.name <mailto:j...@jordi.guillaumes.name>
>>> HECnet: BITXOW::JGUILLAUMES
>>>
>>>
>>>
>> LPFORM.INI tells LPTSPL how to process forms (the paper stock on
>> which a job is printed).
>> The default form is "Normal".  Form names with the same 4 initial
>> characters use the same stock; no operator intervention is required
>> to change among them. (This is used to allow specifying soft
>> parameters, such as the number of banner pages, per job.)  If a job
>> requires different stock, the operator is notified.
>
> IIRC the problem was the print spooler didn’t got started on boot and
> a command to tell OPR it had the default form mounted was required to
> start printing. Defining LPFORMS.INI avoided that problem and the
> print spooler started automatically. Does it make sense to you?
>
> Blurred memories also tell me there was some alignement test involved.
> After telling OPR the printer had the form mounted it asked to confirm
> the form was correctly aligned.
>
> Doh, memory…
>
>
>
Not exactly.  The default form is NORMAL.  The printer is started by
OPR; as long as INITIA runs, it will start OPR, which will take
SYSTEM.CMD.  SYSTEM.CMD is what configures the galactic components.

LPTSPL is started by QUASAR whenever it's needed - that is, a stream is
started and there's a job in the queue (or has been recently).  QUASAR
maintains the printer state, so LPTSPL doesn't have to stick around when
idle.  It's possible that LPTSPL prompts for a form if LPFORM.INI
doesn't exist - I believe there's a default LPFORM.INI on the
distribution tapes, and I don't recall running without one in a VERY
long time :-)

Alignment is invoked when the mounted stock changes and /ALIGN is
specified in LPFORM.INI; it's used to match the VFU to the paper - e.g.,
when printing labels, or pre-printed forms (e.g. invoices, checks,
greenbar). 

It is likely that, without LPFORM.INI, LPTSPL conservatively asks
for alignment.  As I said, it's a good idea to have one.

However, the OP wasn't getting that far - the printer status shown is
"not available", indicating that the stream is assigned to a device that
doesn't exist or is assigned to another job.  The most likely cause is
failing to enable it in SimH.

I'm not inclined to read the code to refresh my memory of what happens
without LPFORM.INI - having one is a good idea, and I don't think it
relates to the OP's issue.





Re: [Simh] Printer on TOPS-10

2018-03-21 Thread Timothe Litt

On 21-Mar-18 07:02, Jordi Guillaumes Pons wrote:
> Some years ago I wrote a note to myself:
>
> - Enable printing:
>
> 1) Create file SYS:LPFORMS:INI with the following content:
>
> NORMAL:ALL/BANNER:01/HEADER:01/LINES:66/WIDTH:132/TRAILER:01
>
> 2) In OPR: SHUTDOWN PRINTER 0
> 3) In OPR: START PRINTER 0/DEVICE:LPT0
>
>
> I don’t remember what problem I was trying to solve, but right now
> this file exists and printing works. Hope it can help you.
>
>
> Jordi Guillaumes i Pons
> j...@jordi.guillaumes.name 
> HECnet: BITXOW::JGUILLAUMES
>
>
>
LPFORM.INI tells LPTSPL how to process forms (the paper stock on which a
job is printed).
The default form is "Normal".  Form names with the same 4 initial
characters use the same stock; no operator intervention is required to
change among them. (This is used to allow specifying soft parameters,
such as the number of banner pages, per job.)  If a job requires
different stock, the operator is notified.
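The 4-character rule is easy to sketch (a toy illustration of the
matching behavior only, not LPTSPL's actual code):

```python
def same_stock(form_a: str, form_b: str) -> bool:
    """Forms whose first 4 characters match are taken to use the same
    paper stock, so switching between them needs no operator
    intervention; anything else triggers a forms-change request."""
    return form_a.upper()[:4] == form_b.upper()[:4]

same_stock("NORMAL", "NORM66")  # same stock: soft parameters differ only
same_stock("NORMAL", "LABELS")  # different stock: operator is notified
```

So a site can define several NORMxxx forms - differing banner counts,
page limits, and so on - that all print on the same paper without
operator action.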

The :ALL is a locator; it specifies which printer(s) the line refers
to - e.g. LPTnnn, or the reserved words 'LOC' or 'REM'.  The switches
define the job format - BANNER is the number of job header pages;
HEADER is the number of header pages written before each file; TRAILER
is the job trailer.  The rest in your example are self-explanatory. 
There are more options; see the operator's guide and operator's
command language reference for more detail.

I don't believe that LPFORM.INI is required, but it's a good idea to
have one.  Mine contains:

NORMAL/BANNER:1/HEADER:1/TRAILER:1/RAM:LP96/VFU:NORMAL
NARROW/BANNER:1/HEADER:1/TRAILER:1/WIDTH:80/RAM:LP96/VFU:NORMAL
LABELS/BANNER:1/HEADER:1/TRAILER:1/WIDTH:100/RAM:LP64/VFU:LABELS
66LINE/BANNER:1/HEADER:1/TRAILER:1/RAM:LP96/VFU:66LINE

With respect to the START command, /DEVICE is not required when writing
to the default printer.  It's used when you have multiple printers or
want to redirect printer output to some other device - usually a magtape
-- e.g. when producing microform.

With remote (ANF-10) printers, it's qualified by /node.

SYS:SYSTEM.CMD usually starts the printer; mine contains these lines
related to the printer:
set printer 0 page-limit 2000
start printer 0

The OP's issue is likely that the printer isn't ENABLEd in SimH.




Re: [Simh] Printer on TOPS-10

2018-03-19 Thread Timothe Litt
I don't think that my advanced printer code was merged into the master fork.

This is what works for me:

set lp20 enable
set lp20 printer=lp07b
attach -S lp20 form=graybar
image=c:\Users\timothe\Documents\GitHub\Run\Form_BG_Grayscale.jpg
c:\Users\timothe\Documents\GitHub\Run\Spool\SAM.lpt.pdf

You probably just need the "set lp20 enable".

On 19-Mar-18 14:27, Quentin North wrote:
> Possibly a silly question, but does anyone know how to get the printer
> to work on TOPS-10 on simh? I have attached a printer files to the
> printer device but the printer and the spooler fills up with output on
> TOPS-10, but the the printer stays offline in TOPS-10 and I cant find
> out how to get it online.
>
> My startup scripts as follows:
> set tim y2k
> att rp0 disks/t10.dsk
> att lp20 printer.out
> set dz 8b
> att -am dz 2020
> boot rp0
>
> When I get TOPS-10 up and try to print I can’t seem to get it to do
> anything:
>
> OPR>  
> 18:19:42                -- System Queues Listing --
>
> Printer Queue:
> Job Name   Req#    Limit             User
>   --  ---  
>   BCPL         3        6  OPR    [1,2]
> There is 1 job in the queue (none in progress); 6 pages
> OPR>show messages
> OPR>  
> 18:19:49        --No outstanding messages--
> OPR>show status printer
> OPR>  
> 18:19:59                -- System Device Status --
>
> Printer Status:
>   Unit      Status
>     ---
>      0  Not Available
>
> OPR>stop printer 0
> OPR>  
> 18:20:20        Printer 0  -- Stopped --
> OPR>abort printer 0/purge
> OPR>  
> 18:20:30        Printer 0  -- Not Active --
> OPR>start printer 0
> OPR>  
> 18:20:37        Printer 0  -- Already Started --
> OPR>show status printer
> OPR>  
> 18:20:46                -- System Device Status --
>
> Printer Status:
>   Unit      Status
>     ---
>      0  Stopped
>
> OPR>start printer 0
> OPR>  
> 18:20:55        Printer 0  -- Already Started --
>
>
>
> ___
> Simh mailing list
> Simh@trailing-edge.com
> http://mailman.trailing-edge.com/mailman/listinfo/simh




Re: [Simh] MOP header specs?

2018-02-23 Thread Timothe Litt

On 23-Feb-18 19:56, Paul Koning wrote:
>
>> On Feb 23, 2018, at 5:28 PM, Johnny Billquist  wrote:
>>
>> ...
>>> That's why a MOP load normally consists of asking for a program - which is 
>>> the secondary loader.  It's the secondary (or tertiary) loader that knows 
>>> how to unpack an image and make additional requests.
>> Hum. Well, that might be, but both will be served through MOP, so it don't 
>> make much difference if it's the primary or secondary boot. They are served 
>> the same way. 
> Almost.  A secondary loader response carries the entire secondary loader in a 
> single message, so the bootstrap only needs to handle one message.  Tertiary 
> and OS loads are expected to take multiple messages.
Yes.
>
> Either way, though, the MOP protocol spec only describes bits on the wire.  
> How those bits are derived from files on the serving host is a host matter.  
Yes
> Since multiple host types support loading various devices such as routers or 
> terminal servers, it seems likely that they share an on-disk format, but if 
> they do, that's a packaging convenience question, not something the MOP spec 
> addresses.  
Yes.

The server doesn't need to understand the file format.  It MAY, but
needn't.  It can simply serve bytes (mapping file offset 0 to memory
offset 0).  This is the trivial answer: load program has the secondary
loader at offset 0; the secondary loader adds its length (typically
rounded up to a disk block) to subsequent load addresses, doing the same
for any tertiary loader.  This puts everything in one file.  Or the
server can decode a complex header.  In any case, the server CAN just
serve bytes.
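The trivial scheme - one combined file, each stage's length rounded
up to a disk block before the next begins - can be sketched like this
(a hypothetical illustration; the 512-byte block size and the function
names are my assumptions, not part of the MOP spec):

```python
BLOCK = 512  # assumed disk block size used for rounding

def round_up(n: int, block: int = BLOCK) -> int:
    """Round a length up to the next disk-block boundary."""
    return (n + block - 1) // block * block

def stage_offsets(stage_sizes):
    """Given the sizes of the stages embedded in a combined image
    (secondary loader, tertiary loader, OS...), return the file offset
    at which each stage begins.  The server just maps file offset to
    memory offset; each loader adds its own rounded-up length to find
    the next stage."""
    offsets, pos = [], 0
    for size in stage_sizes:
        offsets.append(pos)
        pos += round_up(size)
    return offsets

stage_offsets([700, 3000])  # -> [0, 1024]: 700 rounds up to 2 blocks
```

The point of the sketch: no header parsing is needed anywhere - the
contract lives entirely between the embedded loaders.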

Alternatively, the server can read a complex file format (a.out, VMS
image header) and send responses that expand demand zero pages, or
decompress an LZW-encoded stream, or keep the loaders in separate files,
- or whatever.

It can access and/or serve different file formats based on the request.

There are lots of variations.  MOP carries bytes (octets) on the wire.
The server supplies (or, in the case of dump, sinks) the bits. 
Packaging can be very simple, but involve a contract between the
embedded loader(s) and the rest of the data file.  Or the MOP server can
have deep knowledge of the file format, and present it in a canonical
format that the client loader understands.  Or any combination.

Again, (call this violent agreement), MOP carries bits on the wire. 
That's all.  There is no MOP header.  No semantics are associated with
the bits at the MOP level.  There are packaging conventions that involve
contracts with the server and/or the secondary loaders (which may or may
not be embedded).  These are all decisions that are outside the scope of
the MOP protocol, usually to optimize external factors. 

> You can think of MOP as a simple data transfer protocol; the fact that 
> clients use it to load executable bits into memory is not required.  The same 
> is true for TFTP, and there the name of the protocol makes the point 
> explicit.  MOP doesn't say it quite so clearly but it is just as true there.

Yes, MOP is a data transfer protocol.  It's also more: "Maintenance
operation" - we've talked about the data transfer aspects, but it also
has built in the concepts of programs, loaders, selective loads,
providing time, loopback, redundant servers via multicast, even a remote
console service.  It's designed to allow a remote diskless node to be
managed as if it were local: trigger boot, trigger dump, remote console,
complex load and dump sequences.  Even a nod or two to security (as
understood at the time.)  All with heterogeneous architectures.  It is a
fairly complete solution to "I'm in Hawaii, the target is at the north
pole - I need to load/dump/diagnose it without getting cold".

TFTP is a different animal.  It is designed to move files between
hosts.  The destination is a file (or, in a case of true weirdness, an
e-mail destination).  But TFTP really IS Trivial.  It moves a file by
name.  Period.  All you get is "read, write, data, ack and error".  A
file is opened, and data is processed sequentially to EOF.  It has no
provision for a transfer address, much less providing time,
non-sequential access, or the many other luxuries that MOP allows for.
Loading an executable into memory with TFTP and having it run is not
specified in the protocol.  In fact, anything nontrivial built on TFTP
requires the client to have a more involved contract with the server
(effectively, private protocol extensions).  However, TFTP DOES
specify a "mode" in the read/write command, which can cause the bits
on the wire to be translated or repacked.  This is not a critique;
while TFTP is often abused ("extended"), it does do exactly what it
sets out to do: barebones file transfer, with no optimizations for
security or performance. 
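For concreteness, here is TFTP's entire request vocabulary on the wire
(a sketch following RFC 1350; the helper name is mine):

```python
import struct

# TFTP opcodes (RFC 1350) - the whole protocol vocabulary.
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

def rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a read request: a 2-byte big-endian opcode, then the
    NUL-terminated filename and transfer mode ("netascii" or "octet" -
    the "mode" that can repack the bits on the wire)."""
    return (struct.pack("!H", RRQ)
            + filename.encode("ascii") + b"\0"
            + mode.encode("ascii") + b"\0")

rrq("boot.sys")  # -> b'\x00\x01boot.sys\x00octet\x00'
```

Everything else - transfer addresses, image formats, when to jump to
the loaded code - has to be agreed outside the protocol.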

>   paul
>
>




Re: [Simh] MOP header specs?

2018-02-21 Thread Timothe Litt
For VAX/Alpha code from VMS, .sys would be an executable linked
/NOHEADER.  This would be a raw memory image; no ISDs or compression,
just bits.

In this case, it appears to be a SRM console image that was extracted
from an HP firmware update kit.

The header is used by the firmware update utility and by the SRM's show
version (IIRC).

There's no reason for this to be loaded via MOP, at least on real
hardware; SRM contains the MOP client.

FYI, when qemu was working on alpha emulation, they applied some patches
to that image - I have no clue what they do or why.  See
https://lists.gnu.org/archive/html/qemu-devel/2009-03/msg00693.html

On 21-Feb-18 10:47, Timothy Stark wrote:
> Ok I figured out what is boot block in cl67srmrom.sys. There was no
> exe version (only sys files) for SROM on some Alpha firmware iso
> files.  I checked other larger sys files with exe files are almost
> same boot block for MOP loader.  Exe files are raw binaries to can be
> loaded and run by dummy simple loader.  
>




Re: [Simh] VMware "internal network" and VAX mop frames

2018-02-21 Thread Timothe Litt
On 21-Feb-18 09:57, Paul Koning wrote:
>
> I remember some discussions about trouble if you use a wireless LAN as 
> opposed to a wired NIC, but I don't remember any details.
The short answer is that wireless routers assume that the only
worthwhile protocols are IP/ARP & some VPN tunnels, and that a client
has exactly one endpoint.

Many (most?) wireless routers assume that a client has a single MAC
address.  SimH packets will have one/emulated NIC; the host (client)
packets will have another.  The router will keep track of the MAC
address used for associating with the access point (the host's), and
drop any other (on the theory that it saves bandwidth - with one
MAC/client, "anything else is a waste").  The assumption is violated by
SimH. 

Support for non-IP protocol types is also problematic with some wireless
routers.

The first problem could be solved if wireless NICs had a promiscuous
mode (multiple associations could fool the router).  Unfortunately, as
anyone who's ever tried to get a wireless packet trace knows, wireless
NICs that do promiscuous mode are rare (and expensive).  And of those,
ones that support promiscuous transmit are rarer and pricier.

The only practical solution is to tunnel all the SimH frames over an IP
connection to the wired LAN.  (Or, if you are trying wireless SimH to
wireless SimH, between the wireless host nodes.)  I think someone
figured out how to do this, but I've never bothered.
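A toy sketch of the tunneling idea: wrap each raw frame in a UDP
datagram, which crosses the wireless link as ordinary IP traffic that
the router has no reason to filter.  (Illustrative only - real
Ethernet bridges, such as those used on HECnet, add framing,
filtering, and multi-endpoint support.)

```python
import socket

def tunnel_endpoint(local_port: int) -> socket.socket:
    """One end of a frame tunnel: Ethernet frames travel as UDP
    payloads, so the wireless router sees only the host's own IP
    traffic and never a second MAC address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", local_port))
    sock.settimeout(5)
    return sock

# Loopback demo: send a fake 60-byte frame (DECnet-style source MAC)
# from endpoint b to endpoint a.
a = tunnel_endpoint(27001)
b = tunnel_endpoint(27002)
b.sendto(b"\xaa\x00\x04\x00\x12\x04" + b"\x00" * 54, ("127.0.0.1", 27001))
frame, _peer = a.recvfrom(2048)
```

In practice each endpoint would read frames from a tap device (or from
the emulator) rather than fabricating them, but the encapsulation is
exactly this simple.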





Re: [Simh] MOP header specs?

2018-02-21 Thread Timothe Litt
On 21-Feb-18 09:32, Paul Koning wrote:
> MOP is a protocol, not a storage spec.  The protocol is defined in
> detail, in the MOP architecture spec, which can be found in various
> collections of DECnet architecture specs.
>
Yes
> I assume a SYS file is a file meant to be downloaded by a MOP server
> on some host.  What the file contents means depends on what the sender
> and/or receiver do with that data; the MOP protocol spec has no
> opinion about it.  So the question actually becomes: what do the MOP
> server on OS x, and/or the MOP client on hardware or OS y, expect to
> see in the header of the file used for MOP request z?
>
Yes.
.SYS is a common extension for files loaded by MOP, though it has other
uses in the DEC multiverse.  I've seen it used for PDP-11 code; for
68000 code (used in terminal servers); for VAX code.

MOP load often gets you a secondary loader, which is capable of
multi-block loads.  And that usually loads a tertiary loader, which can
interpret an image file format.  But this is all architecture specific. 
The server knows nothing about it - it serves bits.  The bits are
specified with NML (NCP) in the MOP load database. The PDP-11 boot
architecture is an appendix in the MOP spec.  But it's not mandatory,
and other clients can (and do) vary.

The client gets whatever bits are sent & executes them.  Well, it's
expected to.  MOP could be used to load a bitmap into a graphics buffer
- it takes bits and puts them into memory.  After that, it's up to the
client.

Tim would do better to tell us the full filename, where he found it, and
what he's trying to do.

A8 might be an offset in a VAX image header, and the rest look like the
IDENT and other fields of a header, but I don't have a reference handy. 
Analyze/Image should confirm that.  And 'file' on *ix is pretty good at
identifying a file format. 

It's not a PDP11 boot block, which usually begins with 0240 (NOP).

Beyond that, I'm not inclined to play a guessing game. 

> paul
>
>> On Feb 20, 2018, at 10:13 PM, Tim Stark > > wrote:
>>
>> Folks,
>>  
>> Does anyone know any documentation provides some information about
>> MOP header in SYS files?
>>  
>> Look at first 256 bytes of SYS files:
>>  
>> : A8 00 30 00 44 00 58 00-00 00 00 00 30 32 30 35 
>> |..0.D.X.0205|
>> 0010: 01 01 00 00 FF FF FF FF-FF FF FF FF 00 00 00 00 
>> ||
>> 0020: 20 00 00 01 00 00 00 00-00 00 00 00 00 00 00 00  |
>> ...|
>> 0030: 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 
>> ||
>> 0040: 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 
>> ||
>> 0050: 00 00 00 00 00 00 00 00-03 4D 4F 50 00 00 00 00 
>> |.MOP|
>> 0060: 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 
>> ||
>> 0070: 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 
>> ||
>> 0080: 04 56 31 2E 30 00 00 00-00 00 00 00 00 00 00 00 
>> |.V1.0...|
>> 0090: 00 00 00 00 00 00 00 00-05 30 35 2D 30 35 00 00 
>> |.05-05..|
>> 00A0: 00 00 00 00 00 00 00 00-10 00 87 15 00 00 00 00 
>> ||
>> 00B0: 80 00 00 00 02 00 00 00-00 00 FF FF FF FF FF FF 
>> ||
>> 00C0: FF FF FF FF FF FF FF FF-FF FF FF FF FF FF FF FF 
>> ||
>> 00D0: FF FF FF FF FF FF FF FF-FF FF FF FF FF FF FF FF 
>> ||
>> 00E0: FF FF FF FF FF FF FF FF-FF FF FF FF FF FF FF FF 
>> ||
>> 00F0: FF FF FF FF FF FF FF FF-FF FF FF FF FF FF FF FF 
>> ||
>>  
>> Thanks,
>> Tim
>> ___
>> Simh mailing list
>> Simh@trailing-edge.com 
>> http://mailman.trailing-edge.com/mailman/listinfo/simh
>
>
>
> ___
> Simh mailing list
> Simh@trailing-edge.com
> http://mailman.trailing-edge.com/mailman/listinfo/simh




Re: [Simh] VMware "internal network" and VAX mop frames

2018-02-21 Thread Timothe Litt
I can't think of anything that would fail if the ROM address is the same
as the DECnet address (which is what you're setting up), but no real
hardware could ever have been configured that way.  (It is possible for
software to obtain both, though only one goes on the wire.  One set by
software overrides the ROM, which is globally unique.)
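For reference, the AA-00-04 address is derived mechanically from the
DECnet area.node address: the 16-bit value area*1024 + node, stored
low-order byte first (sketch; the formatting helper is mine):

```python
def decnet_mac(area: int, node: int) -> str:
    """DECnet Phase IV station address: the fixed prefix AA-00-04-00
    followed by the 16-bit node address (area*1024 + node), with the
    low-order byte first."""
    addr = (area << 10) | node
    return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, (addr >> 8) & 0xFF)

decnet_mac(1, 13)  # node 1.13 -> 'AA-00-04-00-0D-04'
```

This is the address the DECnet driver sets when it takes over,
replacing the globally unique ROM address.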

The VAX would not normally have a DECnet MAC address when doing a MOP
boot; the MAC address can't be determined until the SYSBOOT parameters
are read. 

The normal flow would be that the interface announces its ROM MAC
address via MOP, and uses that address when sending LOAD requests.  The
MAC address is changed when the cluster or DECnet driver takes over.

I'm not a VMware user, so they may use different terminology than the
following.

So VMware would need to understand that a MAC address can be changed -
more recent OSs don't set the MAC address, so it could be confused.  I
wouldn't be surprised if it acted like a switch & tried to filter
"unneeded" packets.

You do need to make sure that VMware isn't modifying packets - that is,
you want a bridged configuration, not one where VMware is free to modify
packets (e.g. NAT).  And that VMware isn't set up to isolate VMs. 
"Bridged", "shared LAN", same VLAN  - something like that is what you
want. 

Also, DECnet uses several protocol types (a field in the ethernet
packet).  You will need to make sure that VMware is passing all of
them.  It, or some firewall, may block "unknown" protocol types.  DECnet
(+MOP+VAXcluster's SCS) may be forgotten or blocked. The basic DECnet
protocol types are in the range 60-00 through 60-09; 80-38 through 80-41
& -48 are used by LAT, DTSS & a few others.  Check the Windows firewalls.
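A quick way to sanity-check a capture against that list (the ranges
below are taken from the paragraph above and assumed complete here;
hex EtherTypes, sketch only):

```python
# DECnet-family EtherTypes, per the ranges quoted above.
DECNET_TYPES = set(range(0x6000, 0x600A))   # 60-00 .. 60-09
DECNET_TYPES |= set(range(0x8038, 0x8042))  # 80-38 .. 80-41
DECNET_TYPES.add(0x8048)                    # 80-48

def is_decnet_family(frame: bytes) -> bool:
    """The EtherType lives in bytes 12-13 of an Ethernet II frame."""
    return int.from_bytes(frame[12:14], "big") in DECNET_TYPES

is_decnet_family(b"\xff" * 12 + b"\x60\x03" + b"\x00" * 46)  # DECnet routing
is_decnet_family(b"\xff" * 12 + b"\x08\x00" + b"\x00" * 46)  # IPv4: no
```

If frames in this set appear on the wire but never reach the other VM,
something between the VMs is dropping "unknown" protocol types.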

Of course, if you run IP, you also need IP, IPv6, ARP, etc - but I
wouldn't expect VMware to have a problem with them.

On 21-Feb-18 06:21, gérard Calliet wrote:
>
> Hello Jean-Claude,
>
> The  MAC address of the emulated VAX machine has the value
> AA-00-04-... which will be the result of the calculated DECNET
> address, and it is the same address for the VMware NIC used.
>
> Cordialement
>
> Gérard Calliet
>
>
> Le 21/02/2018 à 12:03, Jean-Claude Parel a écrit :
>> Hello, gerard
>>
>> What is the MAC address of the VAX machine emulated by SIMH ? Is it
>> different from the MAC address of the VMware VM interface used by SIMH ?
>>
>> Cordialement/Regards
>> 
>>
>> *Jean-Claude Parel*   21, Chemin De La Sauvegarde
>> OpenVMS Architect Ecully, 69132
>> 458601    France
>> Global Business Services     
>> Phone:   +33-4-7218-4095     
>> Home:+33-4-7558-3550     
>> Mobile:  +33-6.7171.0434     
>> e-mail:  jcpa...@fr.ibm.com      
>>
>>
>>
>>
>>
>>
>> From:        "gérard Calliet" 
>> To:        "simh@trailing-edge.com" 
>> Date:        21/02/2018 10:58
>> Subject:        [Simh] VMware "internal network" and VAX mop frames
>> Sent by:        "Simh" 
>> 
>>
>>
>>
>> Hello,
>>
>> It's not specifically a simh question, but I hope someone had experience
>> about the issue.
>>
>> I have a VAX VMS on a SIMH on a Windows VMware instance which uses a
>> dedicated NIC for simh.
>>
>> I have an OpenVMS on an AlphaVM emulator on another Windows VMware
>> instance (on the same VMware server) which uses another dedicated NIC
>> for the emulator.
>>
>> (The dedicated NIC have no associated protocol (for example ip) ).
>>
>> I try a network boot from vax vms simh, which could be served by the
>> emulated OpenVMS alpha emulated. I can see the mop broadcasted frames on
>> the wire, outside on the VMware server, but they don't arrive at the NIC
>> at the host instance of the alpha emulation.
>>
>> I think something is filtered out inside the VMware server between its
>> instances. I don't know more.
>>
>> Thanks,
>>

>> Gérard Calliet
>>
>>
>>
>> ___
>> Simh mailing list
>> Simh@trailing-edge.com
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.trailing-2Dedge.com_mailman_listinfo_simh=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=2AoVEMvcRW1lMiTIMmGiShO4dQKZolvsh1Oz2GpyULA=XJN-harnIlfiy6Nm-UhdbwliUFRRucDv2U-bzwbCYRQ=fET4HvAPyJKsZTmohu6n7eViPn81g7Dm1g00AnRnZdY=
>>
>>
>>
>
>
>
> 
> 

Re: [Simh] TOPS-10 question

2018-02-21 Thread Timothe Litt
On 21-Feb-18 05:49, Jordi Guillaumes Pons wrote:
>> Or, on a reasonably recent monitor, create a pathological name.
>>
>> .path bcl:=dskf:[66,667]
>>
>> This is per-job, but may be easier as it avoids learning to do a 
>> mongen/monitor build.  You would have to put it in any batch job that runs 
>> bcpl.
>>
> Is that (‘pathological’) the “official” name?
>
> Curiously enough, IBM refers to a similar thing in MVS/zOS as “esoteric”.
>
> Jordi Guillaumes i Pons
> j...@jordi.guillaumes.name
> HECnet: BITXOW::JGUILLAUMES
>
>
Yes.  Simple logical names existed for a very long time.  They provide
an alias for a device, and are associated with that device with .ASSIGN
or .MOUNT commands.  They live in the DDB; a device can have no more
than one.  They don't allow associating a filename or any filesystem
attributes with the name.

Pathological names, stored in funny space, are more recent and much more
powerful.  They provide a search list of one or more device/directories,
can provide filenames and/or filename defaults, and can be specified as
an extension to the "default working directory".  Any number of
pathological names can refer to a single device.
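
As a rough illustration of the search-list behavior, here is a toy Python model (purely illustrative; this is not how the monitor actually stores or scans funny space, and the device/directory names are invented):

```python
# Toy model of pathological-name resolution (illustrative only).
# A pathological name expands to an ordered list of device:[directory]
# entries; a lookup tries each in turn until the file is found.

def resolve(filename, search_list, existing):
    """Return the first full filespec in the search list that exists."""
    for dev_dir in search_list:
        spec = dev_dir + filename
        if spec in existing:
            return spec
    return None

# Hypothetical BCL: definition with two entries in its search list.
bcl_path = ["DSKF:[66,667]", "DSKB:[10,7]"]
files = {"DSKB:[10,7]BCPLIB.REL"}

hit = resolve("BCPLIB.REL", bcl_path, files)   # falls through to DSKB:
miss = resolve("BCPLIB.BAK", bcl_path, files)  # not found anywhere
```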

See the PATH. UUO, where the term is used in the documentation.

The PATH. UUO functions are documented, but the path command that
accesses them is not.  This was a budget/political issue with
documentation resources.  The path utility may have been shipped as
"customer supported", but was heavily used internally.

Before you ask: Yes, "funny space" is also a technical term for per-job
(process) executive virtual memory.  Originating with the KI10 hardware,
it's an area of the monitor's address space (32 pages) mapped through
the USER page table.  This means that it is automagically context
switched; the monitor can reference job-specific data at the same exec
virtual address.  Funny space is used to map frequently accessed
context, such as the running program's job data area (a user space page)
and per-job monitor pages (such as the pool from which pathological
names are allocated).   In the KL10, the same effect is achieved with
indirect page pointers.

"Funny" is used in the sense of "unusual", not "humorous".  The effect
is that a monitor virtual address doesn't always refer to the same
memory; the target is context sensitive. 
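
A toy model of that context sensitivity (Python, purely illustrative; the page numbers, frame numbers, and jobs are invented):

```python
# Sketch of why a "funny space" address is context sensitive
# (illustrative only).  Pages in the funny range translate through the
# *current job's* page table; everything else uses the shared monitor map.

FUNNY_PAGES = range(32, 64)      # 32 funny pages, per the text

class Monitor:
    def __init__(self):
        self.shared_map = {}     # exec page -> frame, same for all jobs
        self.job_maps = {}       # job -> {funny page -> frame}
        self.current_job = None

    def translate(self, page):
        if page in FUNNY_PAGES:
            return self.job_maps[self.current_job][page]
        return self.shared_map[page]

m = Monitor()
m.shared_map[1] = 100            # ordinary monitor page: one mapping
m.job_maps = {"job3": {32: 500}, "job7": {32: 900}}

m.current_job = "job3"
frame_a = m.translate(32)        # job 3's per-job page
m.current_job = "job7"
frame_b = m.translate(32)        # same exec VA, different physical page
```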



smime.p7s
Description: S/MIME Cryptographic Signature
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] TOPS-10 question

2018-02-20 Thread Timothe Litt
On 20-Feb-18 21:46, Rich Alderson wrote:
>> From: Quentin North 
>> Date: Tue, 20 Feb 2018 22:15:05 +
>> I'm trying to get the BCPL compiler on TOPS-10 going, and I have the
>> install CTL file which sets out the following pre-requisites:
>> ; THE FOLLOWING MODS TO THE SYSTEM/CUSPS ARE ESSENTIAL TO THE SMOOTH  ;
>> ; RUNNING OF BCPL PROGRAMS:   ;
>> ; ;
>> ; 1. CHECK THE CODE IN LINK RECOGNISES COMPILER CODE #13 AS REQUIRING ;
>> ;SYS:BCPLIB.REL AS THE DEFAULT LIBRARY. CODE IS IN LINK V2 ONWARD.;
>> ; 2. CHECK COMPIL RECOGNISES THE EXTENSION .BCL AND .BCP AS REQUIRING ;
>> ;THE BCPL COMPILER. COM22C.SCM IS A FILCOM FILE OF NECESSARY MODS.;
>> ; 3. GET A BCPL LIBRARY AREA BCL: ALLOCATED AND KNOWN TO THE MONITOR.
>> Items 1 & 2 are relatively straightforward, but I'm a complete novice on
>> TOPS-10 and cannot seem to find out how to set up a new library as BCL: and
>> make it known to the monitor. Can anyone familiar with the o/s give me any pointers?
> You have to add an entry to the "Level D GETTAB Table" in COMMOD.MAC and build
> a new monitor.  However, there is a place to do this in the MONGEN dialog, so
> it's not too onerous.
>
> Rich
>
Or, on a reasonably recent monitor, create a pathological name.

.path bcl:=dskf:[66,667]

This is per-job, but may be easier as it avoids learning to do a
mongen/monitor build.  You would have to put it in any batch job that
runs bcpl.




Re: [Simh] pdp11 i/o addressing

2018-02-16 Thread Timothe Litt

On 16-Feb-18 14:51, Clem Cole wrote:
> curmudgeon warning below.
>
> On Fri, Feb 16, 2018 at 11:06 AM, Ethan Dicks wrote:
>
>
> I started on a VAX with 2MB of physical memory in a 16MB physical
> address space but with 4GB virtual addresses.  Switching over to the
> PDP-11 was odd from that.
>
>
> Sigh... I fear that is a fault of your education.
>
> If you ask many (younger) programmers what VM was designed to solve
> (particularly those that never experienced memory-constrained systems
> such as you get in 8, 12, 16 or 18 bit processors), they will tell you
> 'So you can have more addressable memory.'  The problem is said
> programmers never experienced or learned about overlays.  Conceptually,
> a PDP-11 can allow a lot more than the 64K physical limit by 'swapping
> out' and 'overlaying parts' and calling subroutines through 'thunks'
> [which to quote my old friend Paul Hilfinger from page 427 of his book:
> /"an onomatopoetic reference to the sound made by a pointer as it moves
> instantaneously up and down the stack"/].  A process can be allowed to
> be larger than 64K, but only 64K (128K on separate I/D systems) is in
> the set-up memory maps at a time.  A thunked call must (optionally)
> bring the routine and its data into memory if they are not already
> there, and then set up the map to point to the routine in question.
>
> BTW: If you play with BSD 2.11 or the like, it uses overlays to allow
> programs to grow in size.  This was needed as people started to try
> to move features from 4BSD and later back to the PDP-11.  At this
> point, I believe you must have what was sometimes referred to as the
> '17th address bit' - i.e. I/D space, which gives you 128K bytes of
> mapped-in memory at a time.  But you can (with care) let your
> programs grow.
>
> The point is that VM is a mechanism /to automatically manage
> overlays/.  The implementation of this management gets easier if
> there are more address bits than physical address bits, but that is
> the key item that is happening.
>
You have to pick a frame of reference.  There were overlays before VM.
But people may have experienced VM before overlays, or vice-versa.  It is
true that overlays aren't well known or understood by the younger crowd.

But your observation that "VM is a mechanism /to automatically manage
overlays/" is an over-simplification.

Overlays were used to compensate for limited virtual address space.  VM
expands the limit.  But, more than that, it does it more efficiently
than overlays.  Overlays generally are implemented in user mode by a
linker (task builder).  They are not visible to the OS, so limit
sharing.  In addition, they force code (in the overlay areas) to be
writable.  And the overlay loader (invoked by thunks) makes
cross-overlay calls considerably more expensive than an ordinary call -
even when the target call is frequent.  They work better for code than
for data; identifying & swapping out writable data in user mode for an
overlay is non-trivial.  (Sane overlay systems didn't encourage this.)
Although there were some attempts to automate generating an overlay
structure, getting reasonable performance requires a lot of analysis and
programmer effort.  This grows (at least) exponentially as the size and
complexity of a program grows and the overlay structure moves from a
partition or two to a tree structure.  And worse with writable overlays.
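
A sketch of the thunk mechanism described above (Python, illustrative only; no real task builder works this way, but the cost structure is the same: a cross-overlay call may trigger a load into the single overlay region):

```python
# Illustrative thunk-based overlay loader.  All overlays share one
# region; a cross-overlay call through a thunk must first (re)load the
# target overlay, which is what makes such calls expensive.

loaded = {"region": None}        # which overlay occupies the region
load_count = {"n": 0}            # stands in for disk reads

OVERLAYS = {
    "A": {"fa": lambda: "fa ran"},
    "B": {"fb": lambda: "fb ran"},
}

def thunk(overlay, func):
    """Return a callable that faults the overlay in, then calls func."""
    def call():
        if loaded["region"] != overlay:   # overlay not resident
            load_count["n"] += 1          # simulate the disk read
            loaded["region"] = overlay
        return OVERLAYS[overlay][func]()
    return call

fa = thunk("A", "fa")
fb = thunk("B", "fb")
fa(); fb(); fa()                 # ping-ponging: three loads for three calls
```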

VM expands the address space *more efficiently*.  The OS knows about it;
sharing is more efficient; code can be read-only.  Physical memory can
be used for caching recently used pages, reducing I/O.  And a programmer
can do less (not to be confused with no) work.

Overlays had a place - but VM does a lot more than automatically manage
them.   It provides much finer granularity than is practical for
overlays, does so more efficiently (at both the application and system
level), and used thoughtfully saves programmer effort.  Overlays are, in
many respects, a "poor man's VM".  When you don't have paging hardware,
overlays can do a  lot.  But at a steep price.  Initially, paging
hardware wasn't cheap, neither was the OS complexity to implement it
well.  And it took a long time to figure out how to do it well.  (Er,
well enough.  We're still learning.)

That said, VM used carelessly can perform much, much worse than an
overlay structure.  An early version of an operating system that I
won't name decided that "everything is a page fault".  Including disk
I/O.  So to
run a program, you associated it with an address space, and - well. Page
fault reading the start address.  PF reading the instruction at the
start address.  PF reading the operand of the instruction at the start
address It was academically beautiful - but hopelessly
non-performant.  As were programs that figured that since memory is
free, might as well allocate GB sparse arrays.

I remember when VAX's 32-bit VM was supposed to be "infinite" and

Re: [Simh] best way to scan 172 column fanfold 80s printout?

2018-02-11 Thread Timothe Litt

On 11-Feb-18 14:29, Davis Johnson wrote:
> I think what you need is a wide carriage printer with the typical feed
> up through a slot in the bottom, and a camera.
>
> The only working function needed from the printer is form feed.
> Photograph the page that is hanging below the printer, form feed and
> repeat.
>
> Anybody here ought to be able to handle the programming to automate
> this process.
>
> You would need to manually photograph the first page.
>
> The camera would need good depth of field.
>
>
It's not that simple.  You need to deal with at least 2 common vertical
pitches (6 & 8 LPI), and a number of page lengths (and widths).  These
need to be setup per job; not all printers support all these.  Plus,
misalignment (as Al noted, crossing the perforations at the bottom of a
page is quite common).  The OP mentioned that his listings have a hard
crease; this will cause (at least) feed and stacking problems.  Form
feed causes a high-speed slew; this becomes less reliable as the
distance moved increases.  You're proposing an entire page at a time -
which means that the paper will jump off the tractors frequently.[1] 
Old paper is fragile.  Over hundreds of pages, dimensions may not be
stable; it was not uncommon to have to re-adjust TOF after a while. 
There's a fair bit of error detection and recovery to work out.

Lighting is an issue, as is compensating for keystoning and other
misalignments.  Most cameras don't have a standard remote trigger
interface - one of the pointers I provided loads modified firmware into
cameras from one manufacturer to make this work.  If you look at digital
camera reviews, you'll see that the lenses have varying degrees of
artifacts, especially at the edges.  So you need to find and zoom to an
area that's relatively "flat" & doesn't need a lot of correction.  While
depth of field will help, it also will result in apparent font size
changes as paper sways forward and back.  If you stop that, you simplify
the OCR - and don't need as much depth of field.

There are many backgrounds that need to be subtracted for OCR to work. 
(Printer paper was notorious for institutional logos, as well as bars
and other aids to human readers.)  Then there are the other issues
mentioned in my earlier note.
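
Background subtraction itself is conceptually simple once the scan is clean; here is a minimal sketch on a toy grayscale image (the threshold value is an assumption; real green-bar stock and faded ribbons would need per-scan calibration):

```python
# Minimal sketch of subtracting a known page background (e.g. green-bar
# shading) before OCR, on a toy grayscale "scan" (0 = black ink,
# 255 = white).  Assumes the bar tint is lighter than any ink pixel.

INK_THRESHOLD = 128          # assumption: ink is darker than tinting

def subtract_background(rows):
    """Map every pixel lighter than the ink threshold to white,
    keeping only the printed characters."""
    return [[0 if px < INK_THRESHOLD else 255 for px in row]
            for row in rows]

scan = [
    [255, 30, 200, 200],     # 200 = bar tint, 30 = ink
    [200, 200, 40, 255],
]
clean = subtract_background(scan)
```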

It seems simple, but it is a P.roject.  That's a capital P.  With a lot
of roject to work out.

It's worthwhile, but it's not simple.  It's a pretty interesting
hardware (and software) project.  I don't mean to discourage anyone who
wants to work on it - but you need to go in with eyes open, or you'll
end up very, very frustrated.

Thunderscan tried to scan line by line & retrieve grayscale; the
challenges were piecing together the adjacent lines with pixel
resolution.   The focal distance was constant because the camera was on
a carriage.  The idea here is to capture a page per frame.  So the
registration problems are quite different.  One could try the
thunderscan approach; it would trade one set of problems - er,
"challenges and opportunities" - for another.

[1] In my experience, with many brands and models of tractor feed
printers over many years.  Paper handling is really difficult to get right.

> On 02/11/2018 01:17 PM, Al Kossow wrote:
>>
>> On 2/11/18 10:11 AM, Dan Gahlinger wrote:
>>
>>> which is why I wondered what people thought of turning an old DEC
>>> teletype or printer into a scanner, by fixing a camera
>>> to it
>> sounds like a bigger version of the Thunderscan
>> https://www.folklore.org/StoryView.py?story=Thunderscan.txt
>>




Re: [Simh] best way to scan 172 column fanfold 80s printout?

2018-02-11 Thread Timothe Litt
These opportunities keep coming up; lots of us archived paper, which
survives longer than magnetics - and their transports.

These seem to be addressed as one-off projects.  It would be more
efficient if a group of interested people could develop/find a sponsor
for a listing -> code facility.  But that may be just a dream.

Scanning paper efficiently requires an investment.  This would seem to
be something that could best be centralized (or regionalized).  Al
Kossow (chm/bitsavers) has hardware for efficiently scanning manuals,
but I don't know if it handles 11 x 17 (line printer) pages.  But he's
not centrally located - and the really scarce resource is labor.

Scanning code is a bit different from scanning books.  Listings tend to
have headers, footers, (tractor feed holes), notations - in some cases,
assembly code or other columns - separate from the code.  Plus lines
and/or colored bars.  And while the font will be consistent & monospace,
ribbons don't always produce crisp impressions.  They fade; the paper
isn't acid-free; zero and O aren't interchangeable, and spaces matter. 
You want to end up with code that can be compiled - with minimal manual
intervention.  So you will want to be able to OCR the result, without a lot
of fixups.  And you need to be able to either select the desired source
code, or reliably post-process to extract it.  So getting every
character (including spaces) right matters, and skew that might be
tolerated in a book becomes a problem with listings.  On the other hand,
if the tractor feed holes haven't been detached, it ought to be possible
to adapt a printer as a pretty good transport.  Printers like the DEC
LA120 have the necessary stepping motors, optical encoders, power, and
are microprocessor controlled.  Some line printers have 4 tractor
drives, which can hold paper flatter than the 2 of serial printers.  But
these are more power hungry and a bit more work to adapt.
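
For the post-processing step, here is a hedged sketch of extracting just the source text from an OCR'd listing (the fixed column layout and "PAGE " header format are hypothetical; real listings vary by compiler and site and must be measured per listing):

```python
# Hedged sketch: recovering compilable source from an OCR'd listing.
# Assumes a 16-column address/object-code prefix before the source
# text and headers beginning "PAGE " - both invented for illustration.

ADDR_COLS = 16                   # assumed width of the non-source columns

def extract_source(listing_lines):
    out = []
    for line in listing_lines:
        if line.startswith("PAGE ") or not line.strip():
            continue             # drop page headers and blank lines
        out.append(line[ADDR_COLS:].rstrip())
    return out

listing = [
    "PAGE 12        EMPIRE.FOR",
    "001234 254000   MOVE A,B",  # 16 prefix columns, then source
    "",
]
source = extract_source(listing)
```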

In any case, the problem is that building something efficient is a
Project; it would be really useful in the grand scheme of things.  But
for any one recovery, it always seems better to just stumble along with
something ad-hoc. So local optimization, as often is the case, wins over
global.

Here are some starting points from the book world:

https://www.diybookscanner.org
https://arstechnica.com/gadgets/2013/02/diy-book-scanning-is-easier-than-you-think/
https://www.wired.com/2010/04/the-20-diy-book-scanner/
https://makezine.com/projects/make-41-tinkering-toys/diy-book-scanner/
http://scantailor.org
https://www.theverge.com/2012/11/13/3639016/google-books-scanner-vacuum-diy

P.S. I once worked in a very small company; cash was short.  We used
each sheet of printer paper four times, and never burst it.  Front and
back, of course.  But it turns out that most of our listings were
left-skewed.  So turning the paper upside down and printing the right
side was adequate for working listings.  There was minimal overlap.  I
wouldn't want to scan those :-)

On 11-Feb-18 11:18, Pär Moberg wrote:
> Look at the diy book scanning community for inspiration and make sure
> that the light comes at an angle that doesn't reflect in to the camera.
> I just found an LED light fixture that pumps out a lot of light and is
> as long as a "tube light", 1.2 m (approximately 1.5 yards).
> //Pär 
>
> On 11 Feb 2018 at 5:09 PM, "Zane Healy" wrote:
>
>> On Feb 11, 2018, at 6:55 AM, Dan Gahlinger wrote:
>>
>> I have several printouts like this,
>> the one I was just trying to scan in is labelled "EMPIRE Version
>> 4.0 18-Jan-81"
>> with the notice: "Please send bug reports to ELROND::EMPIRE"
>> This is a Vax/VMS Fortran conversion from TOPS-10/20 from sources
>> from around fall 1979
>> It seems I only have the first 95 pages of this printout
>> and it's folded width-wise, making scanning more difficult, old
>> folds are hard to get out.
>>
>> I also have Zork (Vax/VMS) and of course several different
>> iterations of Trek7 (Vms)
>> somewhere I have a copy of Adventure (Colossal Cave) and the
>> "Castle" game I love so much.
>>
>> so I guess question 1: how best to get rid of the folds? my
>> method so far: fold them the other direction and try and fold it
>> out, but so far not much luck
>> and 2: how best to scan 100s of wide fanfold printout pages?
>>
>> I wish someone in Toronto had converted an old teletype and put a
>> camera on it, that would be brilliant!
>>
>> Dan.
>
> The best way might be a piece of glass (to keep the paper flat), a
> copy stand, and a high-MP DSLR.  Lighting in that situation would
> be… interesting.  I’m not sure how much a polarizer on the lens
> would help.  One option might be to put it on a light table, but I
> think that would create an interesting/unreadable mess.  Actually
> less light might 

Re: [Simh] Preservation matters

2018-02-09 Thread Timothe Litt

On 09-Feb-18 16:27, Paul Koning wrote:
>
>> On Feb 9, 2018, at 2:16 PM, Timothe Litt <l...@ieee.org> wrote:
>>
>> This isn't strictly SimH, but it is a related story about the importance of 
>> preservation.
>>
>> For those of you who may not have been following it, here's a story that 
>> emphasizes why preserving computing history matters.  
>> https://go.nasa.gov/2EeF5SO
> Amazing.  We hear stories from time to time about government agencies such as 
> NASA misplacing old tapes, and occasionally those stories may even be true.  
> But this is the first I've heard of them misplacing a satellite.
It wasn't misplaced.  They knew exactly where it was.  It just stopped
talking due to a cosmic ray-induced hardware fault.  NASA stopped
listening for it when they ran out of recovery options.  Long after
that, it woke up (I have a pretty good theory on how and why), and when
someone else heard it the fun began.  How to reconstruct a ground
station that depends on abandoned hardware & software?  Not to mention,
when it's proven healthy, how to find money to operate it and collect
the data?

If you follow the references, NASA did an excellent detailed failure
report and analysis.  If you're at all interested in such things, I
recommend it.

There was an earlier recovery effort for another space veteran, in which
I was (very) peripherally involved.  Search the web for ISEE-3 reboot.

As for missing tapes, I'd really like for someone to find the original
slow-scan TV from Apollo 11.  They seem to have been recycled because
"tape (and storage space) was expensive".  (Heard that before?)

>   paul
>
>




[Simh] Preservation matters

2018-02-09 Thread Timothe Litt
This isn't strictly SimH, but it is a related story about the importance
of preservation.

For those of you who may not have been following it, here's a story that
emphasizes why preserving computing history matters. 
https://go.nasa.gov/2EeF5SO

Additional resources:  https://skyriddles.wordpress.com and
https://twitter.com/coastal8049

There's quite a bit more about it on the web - it actually made the news.

The gory details belong in other forums.  But the bottom line is that
"old" software and computing environments (among other things) may
enable an exciting science project to come back to life.  (And at
trivial cost.)

Enjoy.




Re: [Simh] anyone know how to convert/translate turbo pascal to vax pascal?

2018-02-07 Thread Timothe Litt
On 07-Feb-18 18:22, Dan Gahlinger wrote:
> since I did all that work recreating "castle" from the vax to the pc
> using turbo pascal (then free pascal)
> I'd like to "port" it back to the vax,
> but the vms 7.3 pascal compiler doesn't recognize "writeln"
I would be very surprised by this; writeln is in my copy of Jensen & Wirth.
writeln is also in the PASCAL user manual -
http://bitsavers.trailing-edge.com/pdf/dec/vax/lang/pascal/AI-H485E-TE_VAX_PASCAL_User_Manual_Feb87.pdf

See also the language reference manual -
http://pascal-central.com/docs/pascal-refmanual.pdf, which lists WRITELN
as a predeclared procedure.

> and I suspect things like types and records and case statements wont
> work either.
>
> anyone have any ideas?
>
I don't have VAX PASCAL installed.  And I never used Turbo PASCAL.

But I'd start with HELP PASCAL (at the DCL prompt).

Look for a sub-topic like Language, Syntax, and/or RTL.  VAX languages
generally have detailed help.
You'll probably find your answer there. 

Also, you may need /standard:(validation with ANSI or ISO), and/or
/old_version, depending on what flavor/version of VAX PASCAL you used.
See appendix D.

Use the manuals for gory details.

> Dan.
>




Re: [Simh] Tops-10 question/help

2018-02-07 Thread Timothe Litt

On 07-Feb-18 15:27, Quentin North wrote:
> Hi all
>
> A bit off topic, but I'm not a historical DEC user, so I don't know some
> of the basics of DEC OSes and have never operated one before.
>
> I have installed and configured Tops-10 on a simh PDP10.
>
> It is all fine, but when I am creating user accounts I have bumped
> into two minor issues and after wading through some enormous manuals I
> am none the wiser on how to solve. The issues are:
>
> When I set up a new user I get a message something like:
> New user PPN: [27,100]
> %REANDF No default project 27 profile found
>
> How do I set up the default profile and where (show mentions [27,%] or
> [%,%] but I don’t understand these PPNs)?
>
> When I login with a user that I have created I don’t seem to get a
> default SSL and dsk: is not assigned to dskb. If I manually assign
> dskb: dsk: that seems to work in that I can direct, but it doesn’t
> seem to just do that.
>
> .login quentin
> Job 3  KS10  TTY1
> %LGNSLE Search list is empty
> 20:05    7-Feb-118   Wednesday
>
> .dir
>
> %WLDSLE Search list empty 
>
> .dir dsk:
>
> %WLDSLE Search list empty dsk:
>
> Thanks for any help.
>
> Quentin
>

The defaults for a group are under the PPN [27,%].  Just create it with
react.  If there is no [27,%], the system-wide default is [%,%].  %
represents a reserved project or programmer number.  It is accepted by
the parser.

The search list for a user is defined by the disks on which quotas are
assigned.  Again, in react.

.r react

react>change [27,200] ; or [27,%], or [%,%]

change>structure-quota dskb inf inf

(I'm not quite sure of the exact syntax; it's been a while since I had
to do this - use '?')

I believe that react is documented in the tops-10 installation guide.

This assumes a relatively recent monitor.  The older react had a more
cryptic UI, and its own specification in the software notebooks.  I
think it was about 7.02 when the new react was written using GLXLIB.
The early prototypes had horrible performance due to a homebrew file
structure.  We had university customers with thousands of accounts.  I
converted it to use RMS index files - which upset the RMS group (who
thought RMS-10 would only be used for COBOL), but the performance was
stellar.  So the RMS group was instructed to fix the one bug that this
uncovered.






Re: [Simh] EXT :Re: DEC Alpha Emulation

2018-02-05 Thread Timothe Litt
On 05-Feb-18 13:36, Clem Cole wrote:
>
>
> Point taken, but DEC used the SPD as its primary defense for exactly
> this type of problem.  It was the 'legal' definition of what was and
> was not allowed.  But as you point out, that behavior does not always
> make for happy customers or sr managers.
>
I started in the field, and consulted with the corporate flying squads.
The SPDs' value as a legal definition was of more interest to lawyers &
junior product managers than to those at the sharp end of the spear.
Happiness, even at expense above and beyond legal technicalities,
brought more business than sticking to the letter of the law.
Unhappiness was very, very expensive.  I have stories that run both ways...
>  
>
> The truth is that at least Tru64 (I think it was Fred Knight - Mr.
> SCSI) had code that detected when your SCSI bus was being shared.
> It would have been easy to add a side lookup to check the
> controller being used and, if it was not in the official table,
> produce a boot message saying -- "/shared bus with unsupported
> SCSI controller, please remove sharing or replace controller and
> reboot./"
>
>
> But I could never get marketing to accept that.
>
I wish it were that simple.  In this case, Marketing's intuition covered
some technical challenges.  I had many a talk with Fred when I was in
the Tru64 group.  That 'table' would have to deal not only with
controller types, but with compatibility of firmware versions for every
device on the bus.  And the permutations of what worked (and didn't)
weren't static.  The sys_check maintainer made some efforts, as did the
SPEAR folks in CSSE.  But everything was a moving target. 

The trivial case of "don't ever use this controller in a cluster" isn't
all that hard to blacklist.  Of course, when the foobar-plus comes out
with a different device ID, but the same bug, you have to blacklist it
too.  Before any customer finds one at "American Used Computers" (Kenmore
Square, before e-bay :-)  And don't forget that to find another
controller on the bus, you have to enumerate the bus.  This can have
side-effects with "bad" controllers.  The bugs weren't all limited to
fail-over.  IIRC tagging and command queuing had issues; at least one
controller created parity errors (and some undetected ones).

But maintaining a useful whitelist - with all the churn in the SCSI
space - would be a nightmare.  Disks have firmware & HW revs. 
Controllers too.  Blocking all 3rd party disks (despite the frequent
firmware issues) isn't viable.  Don't forget CD/DVD, tape, and even
ethernet.  Even getting customers to install patches was hard (patch
quality and interactions were among my issues); patching to keep up with
hardware/firmware revs wasn't going to fly.  And you need this
information before you have a file system; preferably in the boot
driver.  So no, not a config file.  Maybe SRM console environment
variables...  Even in the relatively controlled environment that DEC was
able to impose, SCSI should have been called CHAOSnet - except that name
was taken.

Worse, once you produce one error message in a problem space (e.g.
invalid HW config), suddenly NOT producing errors for all the other
cases that don't work becomes a bug.

> My point was that if we detected it (which was not not that hard),
> then we could have at least said something.   And in practice if you
> still ignored it and it was in all those system logs, it would have
> been pretty easy to say to the end customer, /we told you not to do that/.
By the time it's in a system log, it's too late.  The logging disk is
probably on the SCSI bus.

"I told you so" - not a happy strategy.

For the simple case of only two machines sharing a bus: what do you mean
by "at boot time"?  The first machine powers up, and is "alone" with a
"good" controller.  Two weeks later, the owner of the second machine
(with a "bad" one) returns from vacation and turns his on.  His dog
brought him a magazine article on clusters, so why not jump in?  It
might, maybe, manage to boot to the point of noticing the first one
without polluting its transfers.  Note that at this point, the first
machine is undoubtedly doing disk writes; packet corruption is not as
"harmless" as when you have a ROFS.  And the second machine has to touch
the first's controller to query its versions.  And to find it, it
enumerates the entire bus.  Meantime, does the first machine repeat the
boot-time check? How does it notice?

As I said, when something's wrong, logging to disk with an invalid
hardware configuration isn't going to fly.  Above the hardware level,
you're not in the cluster (yet), so how are you going to get the disk
bitmaps (and locks)?  And write to a ROFS?  Normally, these are queued
in memory (and retrieved for syslog by dmesg).  But with this
misconfiguration, the last thing you want to do is join the cluster &
remount the logging disk R/W.  So you can't log to disk.  You might want
to try to send to a network syslog - 

Re: [Simh] EXT :Re: DEC Alpha Emulation

2018-02-05 Thread Timothe Litt
On 05-Feb-18 12:01, Clem Cole wrote:
>
>   But marketing never accepted because of the failover issue for
> clusters.
>
> I never understood that.  My argument was that nobody was going to
> *knowingly* put a $1M cluster at risk with a $100 PCI card.   We
> could have just stated in the SPD that the Adaptec chip set was
> supported on single (small) systems such as Workstations, DS10, DS20
> etc...  But I lost that war.
>
The *word* you left out was probably the issue.  It is trivially easy to
add a workstation to a cluster, and neither VMS nor Tru64 verify that
hardware meets requirements when a node joins a cluster.  So it's not
easy to dismiss the scenario that someone buys a workstation that is not
intended for cluster use; then circumstances change and it turns up in
your cluster.  And it "just works" for a long time, until you hit the
corner case.  In your $M enterprise, stuff gets passed around and
information gets lost as ownership changes at the periphery.  (The way
things moved about on the ZK engineering clusters  is typical.  Despite
attempts to control, people needed to do their jobs & configuration
limits were ignored/fudged.)  *We just didn't make adding a node to a
cluster difficult and mysterious enough.*  Plus, profit is usually a
percentage of user cost.  More cost => more profit.  (Assuming you make
the sale.) 

So product management's conservatism is understandable, given the risk
that the SPD won't be re-read when the function of a node changes, and
the resulting data corruption being laid at DEC's feet.  Engineers
aren't known for reading the instructions - and IT people who are
under-staffed and under pressure less so.  SPDs are even less appealing
- they tend to be read at initial purchase - and subsequently only when
the finger pointing starts.  And that's after customer services has
spent a lot of time and money diagnosing the problem.

These days, we have gates with names like "network admission control";
they won't allow a VPN or Wireless client to connect to a network unless
software is up-to-date.  Something along those lines that also included
hardware and firmware would be a useful addition to clusters - assuming
you could do everything quickly enough to prevent cluster transition
times from becoming unacceptable.  It's non-trivial; the nasty cluster
cases have to do with multi-ported hardware, so you need to check
firmware revisions & bus configurations on all ports for compatibility. 
With all the permutations of the controllers being on stand-alone
systems, cluster nodes not yet joined, joined cluster nodes, and
redundant controllers on the same node.  And interconnects: CI, NI, MC,
DSSI, SCSI.   And hot swap, which can upgrade or downgrade a controller
on the fly.

So, the counter-argument becomes "how much engineering should be
invested in allowing a customer to save $100 on the cost of a PCI
card?"  And the easy answer is one of "none" and "it's not a priority". 
Ship only cluster capable hardware, and "problem solved".  Not all
engineering problems are best solved with engineering solutions.  But
I'll grant that the engineering would be a lot more fun :-)

An imperfect analogy would be selling cars without windshield wipers to
people who promise that they never drive in the rain.  It's in the
nature of things that someday the rain will come.  Or the car will be
passed on.  Of course, missing wipers are a lot more obvious than what
kind and revision of a PCI card is buried in a cardcage :-)

A better analogy is an exercise left to the reader.



smime.p7s
Description: S/MIME Cryptographic Signature
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Crowther's Adventure game

2018-02-03 Thread Timothe Litt
IFILE (and OFILE) don't allow specifying a file extension.  In fact,
they only support 5 (or fewer) character file names.  (5 x 7-bit =
36-bit word - don't ask about the extra bit.)  They're a hack to allow
specifying a filename at runtime; earlier, you had to use a hardcoded
name associated with each LUN.
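The 5-characters-in-a-word packing is easy to see with integer arithmetic. This is a sketch of the layout described above (five 7-bit characters left-justified in a 36-bit word, with the spare low-order bit), not a reimplementation of the actual RTL:

```python
# Sketch: five 7-bit ASCII characters fill 35 bits of a 36-bit PDP-10
# word, leaving the famous spare low-order bit.

def pack_a5(text):
    """Pack up to 5 characters, space-padded, into a 36-bit integer."""
    word = 0
    for ch in text.ljust(5)[:5]:
        word = (word << 7) | (ord(ch) & 0x7F)
    return word << 1          # the unused spare bit

assert pack_a5("TEXT").bit_length() <= 36
```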

There were two RTLs for TOPS-10 FORTRAN.  F40 originally shipped with
LIB40, also known as FORSE.

It subsequently shipped with FOROTS - the "new" RTL for FORTRAN-10.

LIB40 doesn't support OPEN, which is necessary for control of record
conversion.

FOROTS does.  With F40, one uses CALL OPEN ; with FORTRAN-10, the OPEN
statement.

If you have FOROTS, you can replace the IFILE call with OPEN/CALL OPEN,
as documented for FORTRAN-10.

The fact that the shipped filename is 6 characters indicates that FOROTS
is expected - I don't believe that IFILE was extended for FOROTS, but
OPEN certainly allows a full filespec. 

So it looks like you have a data file intended for use with FOROTS, with
code that someone tried to adapt to use with LIB40 by substituting IFILE
for OPEN and changing the name.

You're going to have to change something.  Either the format (to discard
the carriage control), or the IFILE to OPEN (to tell the RTL to discard
it for you).  If it's only read with one format statement, I'd go for that.

You can write portable FORTRAN, but this is a classic example of how
early implementations made it hard at the edges.

On 03-Feb-18 14:17, Lars Brinkhoff wrote:
> Timothe Litt wrote:
>> Is the file extension .DAT? that may trigger this.
> It is.
>
>> Is there an OPEN statement for that file? If so, what does it include?
> There's no OPEN statement.  The input is opened by this:
>
> CALL IFILE(1,'TEXT')
>
> The accompanying file is called ADVENT.DAT.  The IFILE call expects
> TEXT.DAT instead.
>
>> What fortran compiler/runtime? DEC's F40? FORTRAN-10? Something else?
> DEC F40.
>
>> You need the reference manual for whichever compiler/RTL you're using.
> I have the FORTRAN IV (F40) PROGRAMMER'S REFERENCE MANUAL from
> bitsavers.




Re: [Simh] Crowther's Adventure game

2018-02-02 Thread Timothe Litt

> Timothe Litt wrote:
>> I may have missed it, but you'll get more help if you provide the OS that
>> you're running on, the file attributes (for DCL, dir/full and/or
>> dump/header), and a hex dump of the first block or two of the file.
> I'm running this in ITS, using the DECUUO TOPS-10 emulator.  So maybe
> it's not surprising if it doesn't work exactly right.  There aren't much
> in the way of file attributes.
>
> This is what the file looks like in Unix.  The linefeeds are converted
> to CRLF when transferred to ITS.
>
>   31 0a 31 09 20 59 4f 55  20 41 52 45 20 53 54 41  |1.1. YOU ARE STA|
> 0010  4e 44 49 4e 47 20 41 54  20 54 48 45 20 45 4e 44  |NDING AT THE END|
> 0020  20 4f 46 20 41 20 52 4f  41 44 20 42 45 46 4f 52  | OF A ROAD BEFOR|
> 0030  45 20 41 20 53 4d 41 4c  4c 20 42 52 49 43 4b 0a  |E A SMALL BRICK.|
> 0040  31 09 20 42 55 49 4c 44  49 4e 47 20 2e 20 41 52  |1. BUILDING . AR|
OK, so it's stream-crlf.  I was never much of an ITS user, and never
used FORTRAN there.  I don't think ITS had file attributes at the
filesystem level, so we don't have to worry about that level of
complication.

If a leading space (1H ) gets the data read correctly, the RTL thinks
the user code expects a FORTRAN carriage-control file, and is turning
the crlf into data.  This behavior is either explicit (an OPEN
argument) or implicit (based on file name, unit number, or device
type).  Is the file extension .DAT?  That may trigger this.

Is there an OPEN statement for that file?  If so, what does it include? 
If not, how is it connected to which LUN, and what's its full name (as
seen by a UUO)?

What fortran compiler/runtime?  DEC's F40?  FORTRAN-10?  Something else?

Later versions of F40 supported open (some earlier versions had CALL
OPEN(same args)).

You need the reference manual for whichever compiler/RTL you're using.

For FORTRAN-10, look for aa-n383b-tk ("TOPS-10/TOPS-20 FORTRAN Language
Reference Manual")

There is a bitsavers URL for that, but it's broken and seems broken on
the mirrors too:
http://bitsavers.org/pdf/dec/pdp10/TOPS10_softwareNotebooks/vol11/AA-N383B-TK_fortLangMan.pdf





Re: [Simh] Crowther's Adventure game

2018-02-02 Thread Timothe Litt
Bob's comments agree with mine.

What may be confusing people is that reading a data file may be
different from reading a print file.

This depends on the OS.  In the case of VMS, it depends on the file
attributes, which tell RMS whether the file has embedded carriage
control characters, FORTRAN carriage control characters, "print" (PRN =
prefix/suffix implied) carriage control, CRLF-delimited lines,
LF-delimited lines, or an ASCII stream.  In the case of TOPS-10/20, a
.DAT file type implies FORTRAN carriage control; everything else is
explicit carriage control - by convention crlf, crff, crvt.

The default (or file attribute-declared) interpretation can be
overridden with OPEN arguments; this was new in F75 or so, though DEC
supported it as an extension earlier.

Any data file that you have has probably been through some form of
conversion.  And depending on any file system attributes, may have been
converted correctly or not.  And even if correctly converted, may have
an incorrect implicit conversion where it is now.  Older programs
probably don't specify OPEN arguments and rely on the defaults.

So, depending on what platform you are on now, you may have to set
attributes or force a conversion.

Note that on VMS (and other platforms), a TYPE (or cat) command may
invoke implicit conversions different from what the FORTRAN RTL invokes.

I may have missed it, but you'll get more help if you provide the OS
that you're running on, the file attributes (for DCL, dir/full and/or
dump/header), and a hex dump of the first block or two of the file.

I know it's odd, but the 'simple' act of writing a record to a file can
be surprisingly complicated and non-portable.

On 02-Feb-18 14:59, Lars Brinkhoff wrote:
>> I see that the data file from which the strings are read do prefix all
>> strings with a space character.  So it looks like the intent is that
>> FORMAT(20A5) should do the right thing.  Maybe the root cause is in
>> the code reading the data file.
> For example the data file contains lines like this:
>
> 42 YOU ARE IN A MAZE OF TWISTY LITTLE PASSAGES, ALL ALIKE.
>
> Right after the number is a TAB character, and then there's a space
> character, and then the text.  "42\t YOU..."
>
> 1004READ(1,1005)JKIND,(LLINE(I,J),J=3,22)
> 1005FORMAT(1G,20A5)
>
> 1G should read a number.  I'm guessing the TAB is a separator here?  In
> that case, 20A5 ought to read the text including the first space
> character.




Re: [Simh] Crowther's Adventure game

2018-02-02 Thread Timothe Litt
TYPE( ... )  is just WRITE( 5, ...) , where 5 is the DEC LUN for TTY,
which is opened by default. 

Conversion from format carriage control specifiers to device motion (for
TTYs, character such as CR, LF, FF, VT), is the responsibility of the
RTL (or device driver).  Typically, some OS device characteristic
identifies what happens; an interactive or print device converts to
device actions, while disks record the carriage control.  This can
sometimes be influenced by FORTRAN OPEN statement extensions.

RMS based systems handle the conversion in RMS by specifying the record
mode when a file is (RMS) opened.

'/' in a format is 'end of record'.

Similarly, ACCEPT is just READ(5)

There is no magic beyond what the OS reports to the RTL about the device.
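The classic column-1 interpretation can be sketched as a small lookup. This is the textbook FORTRAN carriage-control mapping; real RTLs and symbionts differ in details, so treat it as illustrative:

```python
# Classic FORTRAN column-1 carriage control, as applied by an RTL or
# print symbiont for an interactive/print device (illustrative).

CC = {
    " ": "\n",      # single space before printing
    "0": "\n\n",    # double space
    "1": "\f",      # form feed - new page
    "+": "\r",      # overprint - return without line feed
}

def render(record):
    cc, text = record[:1] or " ", record[1:]
    return CC.get(cc, "\n") + text

assert render("1TITLE") == "\fTITLE"
assert render(" hello") == "\nhello"
```

For a disk file, the same records would simply be stored with the control character intact; the conversion happens only where the OS says the device moves paper.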


On 02-Feb-18 09:46, Clem Cole wrote:
>
>
> On Fri, Feb 2, 2018 at 9:12 AM, Ken Cornetet wrote:
>
> I have vague recollections that FORMAT(/) prints a new line
>
> Sounds right - I'm O-O-O, but I'll try to verify with the compiler
> folks when I'm in the office again. 
>
>
> Format(20A5) takes 20 elements of an array and prints them as
> character stings padded to a width of 5 characters.
>
> Right..   -- mAw - means M elements of an input data type (typically
> Integer) as type Alphabet with a width of w.​
>  
>
>
> "TYPE" is not standard fortran. That must have been a DEC
> extension. Standard fortran would have used "write".
>
> Yes, TYPE was introduced by DEC with PDP-10 Fortran to allow for
> easier terminal I/O on timesharing (original Fortran was designed for
> batch, i.e. LPT or tape style output).  I believe it was picked up in
> the standard with F90 - but again I'll have to ask the Fortran compiler
> folks.   An example of the difference between TYPE and traditional
> WRITE indeed is things like Fortran lineprinter control, but I've
> forgotten the details.
>
>  
>
>
> -Original Message-
> From: Simh [mailto:simh-boun...@trailing-edge.com] On Behalf Of Lars Brinkhoff
> Sent: Friday, February 2, 2018 3:41 AM
> To: Dave L
> Cc: simh@trailing-edge.com
> Subject: Re: [Simh] Crowther's Adventure game
>
> Dave L wrote:
> > Been a long time since I wrote fortran but IIRC the first
> character on
> > the output line was to perform carriage-control of the LPT, so you'd
> > have to always have a leading pad character such as a space in order
> > to get the output lines to be correct. Some characters were reserved
> > actions, 1 = FF from memory. I've not looked at the code
> involved but
> > that'd be my first thoughts
>
> Thanks.  Since the SPEAK subroutine is only a few lines, I'll post
> it here.  Maybe someone knows how TYPE, FORMAT(20A5), and FORMAT(/)
> work.
>
>
>
>         SUBROUTINE SPEAK(IT)
>         IMPLICIT INTEGER(A-Z)
>         COMMON RTEXT,LLINE
>         DIMENSION RTEXT(100),LLINE(1000,22)
>
>         KKT=RTEXT(IT)
>         IF(KKT.EQ.0)RETURN
> 999     TYPE 998, (LLINE(KKT,JJT),JJT=3,LLINE(KKT,2))
> 998     FORMAT(20A5)
>         KKT=KKT+1
>         IF(LLINE(KKT-1,1).NE.0)GOTO 999
> 997     TYPE 996
> 996     FORMAT(/)
>         RETURN
>         END




Re: [Simh] BCPL (was Re: BLISS and C)

2018-01-29 Thread Timothe Litt
On 29-Jan-18 18:07, Bob Eager wrote:
> On Mon, 29 Jan 2018 17:48:22 -0500
> Timothe Litt <l...@ieee.org> wrote:
>
> > On 29-Jan-18 17:45, Dave Wade wrote:
[snip]
> > I seem to remember that there was a BCPL for TOPS-10 in the DECUS
> > library.
>
> I have one here - the Pete Gardner compiler. I did my MSc dissertation
> on a portable version, and indeed found one or two bugs in the TOPS-10
> one!
Please get it on Bitsavers, unless it's already there or there's an
intractable IP restriction.  (Contact Al Kossow)







Re: [Simh] BLISS and C

2018-01-29 Thread Timothe Litt
On 29-Jan-18 17:45, Dave Wade wrote:
>> -Original Message-
>> From: Simh [mailto:simh-boun...@trailing-edge.com] On Behalf Of Bob
>> Eager
>> Sent: 29 January 2018 22:08
>> To: simh@trailing-edge.com
>> Subject: Re: [Simh] BLISS and C
>>
>> On Mon, 29 Jan 2018 12:05:01 -0500
>> Clem Cole  wrote:
>>
>>> One can argue, why did Ken not just build something more like BCPL
>>> instead of B?  I can not say, maybe the brevity of { } from PL/1 was
>>> more attractive than the Algol BEGIN/END style?
>> BCPL was, in any case, using $( $) and (later) { }. It never used BEGIN/END.
>>
> The "B" compiler I used on the Honeywell L6000/L66 used { }.
>
>> And the major drawback of BCPL (which I love) was that it was word
>> oriented. Most machine architectures were not (OK, PDP-10...) One had to
>> use contortions, and a special % operator, to access bytes efficiently.
>>
> "B" is similar, characters were accessed by functions rather than a special 
> operator, but you can, I think use a combination of shifts and logical 
> operators
> ... those familiar with BCPL or C who have not encountered B may find the 
> manual here interesting...
>
> https://www.bell-labs.com/usr/dmr/www/bref.html
>
> it would be nice to find a working compiler for a word based machine...

I seem to remember that there was a BCPL for TOPS-10 in the DECUS library.






Re: [Simh] 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access)

2018-01-28 Thread Timothe Litt
On 28-Jan-18 18:38, Hunter Goatley wrote:
> On 1/28/2018 3:49 PM, Johnny Billquist wrote:
>
>> It's more or less a dead language, unless you are in a very specific
>> environment. So no, most likely it is not worth learning, if you are
>> thinking that you might work with it.
>
Agree.
> If you're writing code that's strictly for VMS and will never be used
> anywhere else, BLISS is a fine choice, if you're interested in
> learning it.
>
Agree.
>> Compared to C? Well, it is similar, I'd guess/say.
>
> BLISS-32 was designed as an operating systems language, so you can
> easily do things in BLISS that you can't do in C. On VAX, you could
> write subroutines that could be called via JSB instructions in MACRO,
> for example.
>
Generally agree.  But it's not a bright line.

IIRC, DECC added #pragma linkage for that.  But that only matters in
kernel code - any user mode JSB linkage  in the VAX calling standard has
a corresponding CALL linkage. 

But BLISS does it in the language proper; including allocating storage
in specific PSECTs.  And with its macros, it is much easier to do those
sorts of things portably.

The C language standard leaves a lot to the implementers' imagination -
or creative interpretation.  BLISS doesn't.

If I need to access device registers portably, I'll take BLISS over the
varying implementations of C's constant, readonly, and volatile.

> On the other hand, C has the C RTL. BLISS has no RTL, so be prepared
> for lots of calls to LIB$ and friends and system services.
>
Which are problematic/impossible in inner modes.  Then again, the C RTL
for inner modes is a late addition, and has restrictions.

You have to know your environment with either language. 

POSIX C provides a rich user-mode function library. 

BLISS requires that you provide your own.  But in the VMS environment,
that's done for you (see starlet.req, lib.req).  That's richer - but
hardly portable.   Then again, the only other targets are DEC/OSF1,
TOPS-10/20 & PDP-11s.  Which, except for this community, probably aren't
of interest.

If you stick with user mode, the details are different, but the
languages are roughly comparable (especially if you include the XPORT
library for BLISS).

It's all academic unless you are working in one of the supported
environments.

> Hunter




Re: [Simh] 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access)

2018-01-28 Thread Timothe Litt

On 28-Jan-18 18:32, Clem Cole wrote:
>
>
> On Sun, Jan 28, 2018 at 4:43 PM, khandy21yo  > wrote:
>
>
> Never had BLISS on anything until long after it would have been
> useful. So how does BLISS compare to C as a systems programming
> language? Is it worth learning at this late date?
>
>
> I'll try to answer your questions in reverse order - probably not worth
> learning; except for some education value and the ability to read and
> really understand any BLISS code you might come upon (if the latter is
> something you really need/want to do).
>
> Armando Stetner, of the TIG (Telephone Industries Group in MKO) once
> made a set of 'BLISS is Ignorance' buttons which he gave to a lot of
> people (I still have mine).   While I loved the language, I loathed it
> too.  ​I'm in a interesting position here, because I learned BLISS
> before I learned C, since I was CMU type at the time and a student of
> Wulf and his wife.
>
> 40 years later, I've written way more C than BLISS.  But as Tim
> was saying, there were some things about BLISS which I still miss -
> primarily the macro system and the way conditional compilation was
> handled.   It was much more sane than C's preprocessor; and the PDP-11
> optimizer (discussed in the Green Book) made the Ritchie C compiler
> seem almost like a toy.
>
> Remember, part of the design of the language was with software
> engineering in mind.  Parnas et al were publishing, and there was a lot
> of thought about what made for good programs.  Hence, no goto. 
> Similarly, it included a macro and conditional compilation system -
> which I think was something that really made BLISS and C much more
> useful than, say, PASCAL.    In fact, people wrote macro systems like m4
> and RATFOR so that PL/1 and FORTRAN could be conditionally compiled in
> a manner that was reasonable.  I've always said, for real SW
> engineering you need to have it (the problem with C/C++ is that it
> gets abused and some resulting code is worse because of it).
>
> The CMU BLISS compiler had one of my favorite errors of all time, BTW.
>   You could use single letters like i, j, and k for loop variables, but
> if your real variables were less than 6 chars, you could get an
> 'unimaginative variable name' warning.  So for real system programs,
> expressions tended to actually have meaning and code was readable and
> easy to understand.
>
> BTW, like C, everything in BLISS is an expression, and I think that
> worked well.  Also, for the PDP-10 at least, it had no language
> runtime (by the time of Alpha I think that was not wholly true).  
> There were a ton of associated libraries, but the compiler did
> everything.   C never really quite got to that because the Ritchie
> compiler was much smaller, so Dennis put a lot into the runtime under
> the covers.  Frankly, as a user since you are always using libraries,
> I never saw much of a difference.
>
> BLISS suffered one major design error (which was self inflicted and is
> an example of theory vs. practice) and a number of smaller ones that
> became sort of a death of thousand cuts.
>
> The big issue was Wulf's choice of 'store into' and 'contents of'
> operators vs. the traditional 'assignment' and C style pointer
> indirection.  His theory is 100% correct, and it made the language much
> cleaner and, >>once you understood it<<, much more regular.  C ended up
> with *, &, -> and a dot operator to handle different linguistic items.
>   BLISS is much more compact and, from a >>compiler writer's
> standpoint<<, mathematically explicit (which is what Bill was, of
> course).  The idea was that if the language was consistent it would
> make for better programs.  The problem is that in practice, humans do
> not read code the same way as a compiler, and the BLISS conventions
> take a lot of getting used to.   Plus, if you are 'multi-lingual', your
> brain has to switch between the two schemes.   [Bill would later admit,
> privately at least, it was a great concept that in practice just did
> not pan out].
>
> And finally, in the days of the old drum printers, if you ever look at
> printouts you will see a certain amount of 'bouncing' of text in a
> line, caused by the head solenoids firing a little early or late.  
> This means tops and bottoms of characters were often cut off and small
> symbols (like the period) might not be seen at all on the paper
> (although if you looked carefully you might see a small indentation
> from where is was supposed to have been -- I have examples of this
> effect in some old listings BTW).  We used to say, if your program did
> not work, get a pepper shaker and a sponge, then pour a few dots and
> remove a few others, and it would start to work ;-)
>
> On the smaller side, there were things like the N different exits.  
> IIRC Wulf used to say that was a bad idea and he should have supported
> labels and then allowed an 'exit' to go to a label.   The language
> took the Algol BEGIN/END 

Re: [Simh] VMS multinet DHCLIENT/SSH2 configuration problem

2018-01-28 Thread Timothe Litt

On 28-Jan-18 18:29, Jeremy Begg wrote:
> Hi Timothe,
>
>>> Once you get SSH working you may find it's unusable.  On my RPi 3 it
>>> takes the VMS MultiNet SSH server several *minutes* to negotiate the
>>> SSL handshake.  I suspect (without having attempted any diagnosis!)
>>> that this is due to SIMH having to emulate a huge number of VAX floating
>>> point instructions.
>> I suggest running PCA to determine if this is in fact the cause.  Or
>> stop the emulation a few times while "hung" and look at the history buffer.
> Can you just clarify, what is "PCA"?  DEC Performance & Coverage Analyzer
> springs to mind but I'm not sure that's what you're referring to.
Yes.  A DECset tool.
> Thanks,
>
>   Jeremy Begg
>




Re: [Simh] VMS multinet DHCLIENT/SSH2 configuration problem

2018-01-28 Thread Timothe Litt

On 28-Jan-18 18:04, Timothe Litt wrote:
>
> On 28-Jan-18 17:21, Jeremy Begg wrote:
>> Hi TIm,
>>
>>> ...
>>> Also I am figuring out how to set SSH2 terminal server. I successfully
>>> generated SSH2 keys on emulated SIMH VAX system.
>>> ...
>>> I installed SIMH and OpenVMS 7.3 on my new Tinker SoC (Pi clone) with
>>> Armbian OS for 7/24 operation.
>> Once you get SSH working you may find it's unusable.  On my RPi 3 it
>> takes the VMS MultiNet SSH server several *minutes* to negotiate the
>> SSL handshake.  I suspect (without having attempted any diagnosis!)
>> that this is due to SIMH having to emulate a huge number of VAX floating
>> point instructions.
> I suggest running PCA to determine if this is in fact the cause.  Or
> stop the emulation a few times while "hung" and look at the history
> buffer.
>
> If I had to guess, I'd start by looking at where the required
> randomness (for key generation) is coming from.  In most
> implementations, that's the cause of typical hangs like these.  I
> don't know what Multinet is using for a source of randomness - perhaps
> Hunter can shed light on that.
>
> FP in SimH isn't that slow - the VAX format is unpacked, then integer
> (hardware) operations are performed on the "mantissa" and exponent,
> then the result repacked.  These are all integer operations, and
> should be reasonably fast on any processor, even a Pi.  The FP code
> would benefit from a high level of compiler optimization, as there are
> lots of opportunities for inlining.
>
> Note that the PI, and many modern CPUs provide a hardware source of
> randomness (which can be behind /dev/{u,}random).  It may be off by
> default, depending on your distribution.  I don't think that SimH VAX
> uses it, but it might be something to export.  Most older OS would try
> to gather randomness from device timing (interrupt jitter), something
> that SimH alters...  It can take quite a bit of data to get enough
> bits to satisfy randomness (or primality) tests used in key generation.
>
> Anyhow, FP would not be the first place I'd look.
>
>> (Even on my real VAXstation 4000/96 the MultiNet SSH server took up to a
>> minute to negotiate the SSL handshake, and of course the system would
>> "pause" every now and again while the session keys were renogotiated.
This supports the guess that the issue is more likely to be generating
random bits than the speed of FP emulation.

When I stopped paying attention to VMS, there was no hardware RNG
support.  The usual advice - MTH$RANDOM is a uniform software pseudo-RNG
that produces an F-float between 0.0 & 1.0.  It is fine for simulations
(as in statistics, not what SimH does), but not suitable for most
crypto.  Given the seed, the next number is predictable as it's a simple
congruential algorithm.  And there's no good way to initialize the seed
(for crypto).  Time, process ID, etc are guessable.  And these days, the
limited range of an F-Floating mantissa is susceptible to a dictionary. 

Things are actually more complicated than this summary, but the bottom
line is that if it is slow on hardware, it will be slow on SimH.  And
I'd look at what is happening during the hangs.  The root cause is
likely getting enough randomness...
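The congruential algorithm mentioned above is easy to demonstrate. The constants below (multiplier 69069, addend 1, modulus 2^32) are the ones commonly documented for MTH$RANDOM; treat them as illustrative. The point is the predictability: one observed output reveals the entire future sequence.

```python
# Sketch of an MTH$RANDOM-style linear congruential generator.
# Constants are the commonly documented ones; illustrative only.

def mth_random(seed):
    seed = (69069 * seed + 1) % 2 ** 32
    return seed, seed / 2 ** 32     # new seed, value in [0, 1)

s = 12345
s, r1 = mth_random(s)
s, r2 = mth_random(s)

# An attacker who sees r1 recovers the state exactly (the output *is*
# the scaled seed) and predicts the next value:
recovered = int(r1 * 2 ** 32)
_, predicted = mth_random(recovered)
assert predicted == r2
```

Fine for statistical simulation, useless for key generation: there is no entropy anywhere in the loop.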

>> I had hoped that in moving from a 100MHz VAX to a 1.2GHz SIMH VAX things
>> might improve, but they went dramatically backwards.)
>>
>> Regards
>>
>> Jeremy Begg
>>
>>   +-+
>>   |VSM Software Services Pty. Ltd.  |
>>   | http://www.vsm.com.au/  |
>>   |-|
>>   | P.O.Box 402, Walkerville, |  E-Mail:  jer...@vsm.com.au |
>>   | South Australia 5081  |   Phone:  +61 8 8221 5188   |
>>   |---|  Mobile:  0414 422 947  |
>>   |  A.C.N. 068 409 156   | FAX:  +61 8 8221 7199   |
>>   +-+
>>
>>




Re: [Simh] VMS multinet DHCLIENT/SSH2 configuration problem

2018-01-28 Thread Timothe Litt

On 28-Jan-18 17:21, Jeremy Begg wrote:
> Hi TIm,
>
>> ...
>> Also I am figuring out how to set SSH2 terminal server. I successfully
>> generated SSH2 keys on emulated SIMH VAX system.
>> ...
>> I installed SIMH and OpenVMS 7.3 on my new Tinker SoC (Pi clone) with
>> Armbian OS for 7/24 operation.
> Once you get SSH working you may find it's unusable.  On my RPi 3 it
> takes the VMS MultiNet SSH server several *minutes* to negotiate the
> SSL handshake.  I suspect (without having attempted any diagnosis!)
> that this is due to SIMH having to emulate a huge number of VAX floating
> point instructions.
I suggest running PCA to determine if this is in fact the cause.  Or
stop the emulation a few times while "hung" and look at the history buffer.

If I had to guess, I'd start by looking at where the required randomness
(for key generation) is coming from.  In most implementations, that's
the cause of typical hangs like these.  I don't know what Multinet is
using for a source of randomness - perhaps Hunter can shed light on that.

FP in SimH isn't that slow - the VAX format is unpacked, then integer
(hardware) operations are performed on the "mantissa" and exponent, then
the result repacked.  These are all integer operations, and should be
reasonably fast on any processor, even a Pi.  The FP code would benefit
from a high level of compiler optimization, as there are lots of
opportunities for inlining.

Note that the PI, and many modern CPUs provide a hardware source of
randomness (which can be behind /dev/{u,}random).  It may be off by
default, depending on your distribution.  I don't think that SimH VAX
uses it, but it might be something to export.  Most older OS would try
to gather randomness from device timing (interrupt jitter), something
that SimH alters...  It can take quite a bit of data to get enough bits
to satisfy randomness (or primality) tests used in key generation.

Anyhow, FP would not be the first place I'd look.
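The unpack-and-operate approach is straightforward to sketch. This is a simplified decode of the VAX F-floating format using only integer operations (sign bit 15, excess-128 exponent in bits 14:7, hidden-bit fraction split across bits 6:0 and 31:16); it is illustrative of the technique, not SimH's actual code, and it ignores reserved-operand handling:

```python
# Decode a 32-bit VAX F-floating value with integer operations only.

def vax_f_decode(u32):
    sign = (u32 >> 15) & 1
    exp  = (u32 >> 7) & 0xFF                 # excess-128 exponent
    if exp == 0:
        return 0.0                           # (sign=1 would be reserved)
    # Hidden bit + 7 high fraction bits + 16 low fraction bits = 24 bits
    frac = (1 << 23) | ((u32 & 0x7F) << 16) | ((u32 >> 16) & 0xFFFF)
    # Value = (-1)^s * 0.1f... * 2^(exp-128)  =  frac * 2^(exp-152)
    return (-1.0) ** sign * frac * 2.0 ** (exp - 128 - 24)

assert vax_f_decode(0x4080) == 1.0
assert vax_f_decode(0xC080) == -1.0
```

Everything here is shifts, masks, and integer multiplies, which is why emulated VAX FP is not dramatically slower than the integer instructions on a modern host.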

> (Even on my real VAXstation 4000/96 the MultiNet SSH server took up to a
> minute to negotiate the SSL handshake, and of course the system would
> "pause" every now and again while the session keys were renogotiated.
> I had hoped that in moving from a 100MHz VAX to a 1.2GHz SIMH VAX things
> might improve, but they went dramatically backwards.)
>
> Regards
>
> Jeremy Begg
>
>   +-+
>   |VSM Software Services Pty. Ltd.  |
>   | http://www.vsm.com.au/  |
>   |-|
>   | P.O.Box 402, Walkerville, |  E-Mail:  jer...@vsm.com.au |
>   | South Australia 5081  |   Phone:  +61 8 8221 5188   |
>   |---|  Mobile:  0414 422 947  |
>   |  A.C.N. 068 409 156   | FAX:  +61 8 8221 7199   |
>   +-+
>




Re: [Simh] BLISS ( was Re: 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access))

2018-01-28 Thread Timothe Litt

On 27-Jan-18 13:59, Clem Cole wrote:
>
>
> On Fri, Jan 26, 2018 at 4:28 PM, Timothe Litt <l...@ieee.org> wrote:
>
> I don't think there was any technical reason that the front end,
> IL optimizer, code generators and object generators couldn't have
> been separate sharable libraries - and separately
> patchable/upgradable. 
>
> I was under the impression that the shared libraries were just that,
> and DEC was pretty tight about it: if the fix was in the backend, all
> front end languages had to be tested (more in a minute).
>
I could be wrong - I followed GEM development off and on as a matter of
curiosity, but didn't get into the internals - however, I was under the
impression that GEM was built into the compilers from object libraries,
not linked as sharable images.  In any case, GEM was not exposed or
documented externally.  And I don't recall any language-independent
patch being issued for GEM - common issues resulted in a patch for each
language - however things were packaged.

What was regression tested internally was quite different from what was
released.  BLISS was regression tested daily, but rarely released (even
internally).  IIRC, there were a lot of GEM changes to support C
optimization and FORTRAN parallelization.  (And language changes.)  And
those languages were released fairly frequently to satisfy customers &
benchmarks.  But I tracked progress through the GEM and language
notesfiles, so I may have a skewed view. 

> But I suspect there was marketing (and qualification) pull toward
> hiding the boundaries when packaging.
>
> Maybe in marketing (undeserved), but qual was important.
>
>  
>
> After all, some 3rd party might have written a backend for a
> non-DEC architecture.
>
> Unlikely - the sad truth is that when both the K folks and Intel
> compiler groups had access to all the code and the doc (and the
> people that designed it), guess what code base was used...  not GEM.
>
That's reality - in a different space-time continuum. 

In the original (DEC-centric) STC, those decisions were made from the
point of view of DEC being the center of the universe, and not wanting
DEC's IP to leak onto other architectures.  Either BLISS itself, or
products coded in it.  The same view that didn't license XMI; greatly
restricted BI licenses; and was too little too late with expanding the
Alpha ecosystem.  (Contrast with PDP-11, where every Unibus system came
with a license grant to build a peripheral...)

Similarly, the DECision on BLISS pricing made sense if you looked at
what DEC invested (I think Brender's paper said a team of 16 people
(pre-GEM) and a couple of $M) and what having it was worth to DEC
engineering.  It didn't recognize how customers would value BLISS, or
what adoption by a wider crowd would be worth to DEC in the long run.

In the current STC, well, I saw a lot of NIH in Intel.  I suspect that
what you report amounts to: "Why tear up a 'perfectly good compiler' to
incorporate technology from a 'failed company' when the result isn't
directly marketable?"

Of course, both share the same defect - a shortsighted world view. 
Which is easy to see a few decades later.

> Grove used to say the DEC (Gem) compiler DNA was being ground up and
> reinserted into the Intel compiler.   To this day the Intel IL is not
> as rich as the Gem IL and it drives a lot of the old DEC team crazy.  
> From what I gather, the closest IL has been what LLVM did, but I
> gather that is still pretty weak for some language structures such as
> FORTRAN (and I believe PL/1).
>
I wouldn't know; it's been a long time since I dabbled in compilers. 
Mostly pre-GEM timeframe.  But I'm not surprised.  GEM was built &
evolved by engineers from the ground up to support multiple languages at
equivalent optimization levels.  Most other ILs start as an internal
tool for one language; when extended, the rule is to make minimal
changes to support each additional language.  This keeps short term
costs down (regressions against and changes to the first language - and
tools), but you lose expressiveness (and optimizations).  And it ends up
being warty and hackish.  But the incremental cost of the next wart/hack
is always less than the cost of rototilling.  There's probably a formal
proof to derive NIH from this observation :-)

Old New England axiom: Never time to do it right; always time to do it over.

Knuth's version: When writing software, do it once to understand the
problem.  Then plan on throwing out what you built, and write it
correctly from scratch.

Neither is put to use in the technology world...at least not often.

> Speaking for myself and my own personal experiences in using the both
> the DEC and Intel tool chains over the years, the common
> back-end/runtime is acute and you see how w

Re: [Simh] 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access)

2018-01-28 Thread Timothe Litt
On 28-Jan-18 13:45, Tim Stark wrote:
> Folks,
>
> There is BLISS source code for TOPS-20 on pdp-10.trailing-edge.com.
>
> There are free copies of the BLISS compilers for VAX and Alpha in the
> Freeware CD dist.
>
> There is an online version of The Design of an Optimizing Compiler on the
> CMU website.
>
> I have a question for you.  Does anyone know of any documents for learning
> how to write BLISS code?
>
>
There is an internal course called BLISS Primer.  It may be on-line; if
not, when I get around to sending more stuff to CHM, I'm sure it will be.

In the TOPS-10 & -20 software notebooks, there are the BLISS-36 Language
Guide, Installation manual, User Guide

TOPS-10 also has the XPORT manual.  XPORT is a library for writing
portable code, including user-mode I/O.  I don't know why it's not in the
TOPS-20 set (or maybe I missed it).

The language guide is pretty readable, if you know another programming
language.

The paper that I referenced earlier gives some context and history.



smime.p7s
Description: S/MIME Cryptographic Signature
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] BLISS ( was Re: 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access))

2018-01-26 Thread Timothe Litt

On 26-Jan-18 14:45, Paul Koning wrote:
> ent.​  I also do not know what they are doing with the front-ends.  
> One of the more curious front ends of GEM is the Alpha assembler.  I found 
> out about that when doing some Alpha hand-optimizing early on (a handcoded 
> "memcpy with TCP checksum calculation while doing it" if I remember right).  
> It turned out I could write drafts of that code and give it to the assembler 
> with a /OPTIMIZE switch to let the back end take what I wrote and do stuff to 
> it.  It wasn't always right, but it was a neat source of ideas.
Well, not exactly an optimizing assembler.  It sort of looks like one. 
But the real story is that the Alpha port needed to deal with the large
amount of MACRO-32 in VMS.  The solution was to treat MACRO-32 as a
compiled language, and generate a GEM front end for it.  There was a lot
of optimization that was absolutely required if you wanted tolerable
code - e.g. most VAX instructions set condition codes, but they are
rarely tested - and when tested, usually only a subset of those set are
involved in the test.  So tracking condition code generation and
consumption is a big win.  And when you look at address generation,
there's a lot of opportunity for CSE elimination, and other
optimizations.  Then you wanted to schedule the generated code for Alpha
- which is a lot of re-ordering, packing & the like.
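The condition-code win can be sketched as a small backward liveness pass. This is a toy model in C, not GEM's actual IL or pass structure (the type and names here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: each "instruction" records whether it sets the condition
 * codes and whether it reads them (e.g. a conditional branch). */
typedef struct {
    bool sets_cc;    /* instruction sets N/Z/V/C */
    bool reads_cc;   /* instruction tests them */
    bool cc_needed;  /* result: must the translation materialize CCs? */
} Insn;

/* Backward pass: a setter must materialize the codes only if some
 * later instruction reads them before the next setter overwrites them. */
static void mark_needed_ccs(Insn *code, size_t n)
{
    bool live = false;              /* is the current CC value wanted? */
    for (size_t i = n; i-- > 0; ) {
        if (code[i].reads_cc)
            live = true;
        if (code[i].sets_cc) {
            code[i].cc_needed = live;
            live = false;           /* older CC values are now dead */
        }
    }
}
```

For a sequence like MOVL/MOVL/BEQL, only the second MOVL's codes are consumed by the branch, so the first MOVL's condition-code computation can be dropped entirely.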

Then it was extended for the Alpha instruction set (psect attributes,
instructions, etc).

At this point, you have a compiler for low level language, that happens
to look like assembler.  Externally, perhaps a distinction without a
difference; internally, quite different.  And if you're unlucky enough
to have to do instruction level debug, very different from traditional
assembler.

There also was the argument that you really couldn't (well, shouldn't)
write pure assembler for Alpha because the best scheduling depends on
the implementation (how many execution units, of what sorts & latencies;
predictions, speculations, prefetch; etc.)

PALcode has some unique constraints that do require manual scheduling,
as do some diagnostics.  But it does turn out to be true that Alpha
assembler is best understood (and used) as a low-level compiled
language, not an assembler.

> The only other optimizing assembler I can think of is SOAP, way back in the 
> 1950s.
>
>   paul
>
>




Re: [Simh] BLISS ( was Re: 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access))

2018-01-26 Thread Timothe Litt
On 26-Jan-18 14:09, Clem Cole wrote:
>
>
> The other thing to add is there were at least two generations of the
> compilers within DEC that I knew about. 
Yes. 
> Tim you may have know of a third when I was off doing other things.  
> The last (current) is the 'Gem' compilers which was a rewrite to allow
> N font-ends, with Y back-ends.   I thought 'Compatible BLISS' was done
> to create BLISS-36/16/32 (PDP-10, 11, Vax) from the original CMU base;
> but was only targeting BLISS. 
Yes - "Common BLISS", not "compatible".  But Common BLISS also included
a lot of language changes.

> AFAIK, the original Compatible BLISS compiler was developed on the
> PDP-10 and eventually replaced the CMU code. 
Yes - VMS was initially developed under TOPS-20 with the cross tools. 
The developers were happy when it became self-hosted on the VAX, though
they lost a lot in moving from a mature environment.  But pride of self
covers a lot of pain.

The LCG products moved to Common BLISS - FORTRAN, RMS, APL perhaps the
most notable.  One or two might have been left behind because they were
so stable.  But none come immediately to mind.

> Prism forced the rewrite of the back-ends and with it the later
> generation and TLG wanted to clean up its act with a single
> back-end/optimizer that was common for all the languages [hence the
> Gem project - I'd have to ask Rich Grove for the details].  IIRC, Vax
> was used as the base for that system, although it moved to Alpha by
> the mid/late 1990s.
>
Sounds right.  The -16 and -36 versions stayed with the native backend
and didn't get much attention once GEM took off.  At least, I don't
recall GEM support for them.  However, there was minimal change to the
language, so Common BLISS remained common.  (I think the changes were
stuff like architecture-specific PSECT attributes, alignments &
builtins.)  GEM was very successful in consolidating optimization
efforts across all the languages.  It also made it feasible to add
object code generation for various runtime environments for multiple
languages.  Turned an n * m problem into essentially n + m.

Alpha pretty much repeated the VAX route (plus the stupid mistake of
splitting the VMS sources).  It cross-compiled from VAX to simulation,
then the internal early development Alpha subset hardware, then Alpha. 
But it was a lot easier, since we had real networks and
cross-architecture clusters; you could compile on a VAX, dismount the
disk, and boot on an Alpha ADU in about 30 seconds; later, compile on a
VAX and run user mode on Alpha without dismounting.  OSF/1 was another
story; I wasn't involved in that.

Because of GEM, compiler "generation" gets a bit fuzzy - updates give
BLISS new optimizations and targets (some radically different), but
(almost) all the work is in GEM, not the front end.  But GEM would race
along for a while, but not be incorporated into the released languages
until there was some forcing function.  That could be a long time for
BLISS, but not so long for FORTRAN and C.  So it depends on where you
draw the line - is GEM part of the compiler, or not?  No doubt the
compiler guys can point to examples where GEM changes affected the
language front ends, but from afar it seemed a pretty stable interface. 
I don't think there was any technical reason that the front end, IL
optimizer, code generators and object generators couldn't have been
separate sharable libraries - and separately patchable/upgradable.  But
I suspect there was marketing (and qualification) pull toward hiding the
boundaries when packaging.  After all, some 3rd party might have written
a backend for a non-DEC architecture.

All three (CMU, BLISS, GEM) back ends used considerable creativity in
interpreting the instruction sets, and as time went on gave hand-coded
assembler a run for its money.  They especially liked to do computations
with bizarre-looking address calculations.  (Not all of which ran fast
on all processors.)  In one case, a particularly "clever" encoding of a
test on a link-time constant broke RMS on an unreleased VAX CPU. 
Interestingly, this one instruction was the ONLY time this construct was
encountered in all of VMS (including the top dozen layered products), so
waiting for the hardware spin wasn't as bad as it might have been.

Every time I do handstands with C #if/#define I still wish for BLISS %if
and %macro.  (I once had to port a few 100K lines of my BLISS code to C
on a 68000, and even with automation, converting macros and keyword
initialized data structures to C was a painful exercise in devolution by
obfuscation...)
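For flavor, a hedged C sketch of the workaround: designated initializers plus #if get part of the way to BLISS keyword initialization and %if, but the selection logic lives in the preprocessor rather than the language proper (the target macros and struct here are invented for illustration):

```c
#include <assert.h>
#include <string.h>

#define TARGET_PDP11 0
#define TARGET_VAX   1
#define TARGET       TARGET_VAX

typedef struct {
    const char *name;
    int         word_bits;
    int         addr_bits;
} Arch;

/* A BLISS %macro could make this decision with %if inside one macro
 * body; in C the same effect needs #if around alternative expansions. */
#if TARGET == TARGET_VAX
#  define ARCH_INIT { .name = "VAX", .word_bits = 32, .addr_bits = 32 }
#else
#  define ARCH_INIT { .name = "PDP-11", .word_bits = 16, .addr_bits = 16 }
#endif

/* Designated initializers are C's nearest analog to BLISS
 * keyword-initialized structures. */
static const Arch arch = ARCH_INIT;
```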


> Clem
>




Re: [Simh] 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access)

2018-01-26 Thread Timothe Litt
On 26-Jan-18 15:54, Rich Alderson wrote:
>> Date: Fri, 26 Jan 2018 14:35:18 -0600
>> From: Hunter Goatley <goathun...@goatley.com>
>> On 1/26/2018 2:22 PM, Timothe Litt wrote:
>>> BLISS would have done better in the outside world, except for the 
>>> DECision to price it higher than the market would bear.
>> Indeed! I was fortunate to get access to BLISS in college thanks to 
>> DEC's CSLG program, but it was their second-most expensive compiler 
>> license (after Ada), so virtually no one outside of DEC used it. When 
>> they originally released Alpha, they weren't planning to make the BLISS 
>> compiler available, but I and others worked to try to get DEC to change 
>> that. As I'm sure you know, in the end, they released it with a free 
>> license for both VAX and Alpha (and Itanium), but it was far too late 
>> for most people to have any interest in adopting it. I still do some 
>> BLISS coding, but I'm one of the few that I know of still doing it.
> In fact, when Digital announced the free licensing for BLISS-32 and BLISS-16,
> I immediately got in touch with our contact within Digital (help me out, Tim,
> what was Dick's last name?  the guy who helped XKL get the 36-bit stuff and
> introduced you and me in Marlboro) about getting BLISS-36 released the same
> way.  There may not have been a large market for it, but I wanted to make sure
> that XKL's customers had access if they wanted it.
Dick Greeley.  Former product manager in HPS, by then in the corporate
licensing group.
> Rich
>




Re: [Simh] 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access)

2018-01-26 Thread Timothe Litt

On 26-Jan-18 15:12, Johnny Billquist wrote:
> On 2018-01-26 20:26, Clem Cole wrote:
>>
>>
>> On Fri, Jan 26, 2018 at 2:09 PM, Johnny Billquist wrote:
>>
>>
>>     Right. As far as I know, BLISS-16 only ran under VMS.
>>
>> Hmm I'd be careful here.   As I understand it,​ Hobbs has implied
>> they did the work on the 10 to start with because at the time TLG was
>> using PDP-10s.   As one of the language designers, I'd believe him. 
>> That said, what saw the light of day as product I can not say, I was
>> not paying attention to that in those days.   Phil or Tim might know.
>
> Looked around some more, and it seems both BLISS-16 and BLISS-32 could
> be run under the PDP-10. Oh well. Never seen or heard about that in
> real life, but I guess it must have existed at one point then.
>
I used both in real life. 

I don't believe either was released externally.

BLISS would have done better in the outside world, except for the
DECision to price it higher than the market would bear.

> Johnny
>




Re: [Simh] BLISS ( was Re: 101 Basic Games for RSTS/E (was Re: PDP11 on Simh for public access))

2018-01-26 Thread Timothe Litt
On 26-Jan-18 11:37, Paul Koning wrote:
>
>> On Jan 25, 2018, at 8:15 PM, Clem Cole  wrote:
>>
>> ...
>> RSTS Basic is a late entry, the language support for it, originally came 
>> from the compiler group which again was originally PDP-10 based (also 
>> remember the PDP-11 BLISS compiler needed a 10 to run it).
> Are you talking about BASIC-PLUS-2?  RSTS BASIC-PLUS dates from 1970, and was 
> written under contract for DEC by Evans, Griffiths & Hart ("EGH").  It is 
> essentially a P-code compiler, to use terminology that didn't appear until 
> later; it doesn't generate any machine code.  And as far as I know, it is not 
> based on any BASIC implementation for any other system.
>
> As for BLISS, there's BLISS-16 and BLISS-11.  One came from Carnegie-Mellon; 
> the other was built at DEC.  Both are cross-compilers, but I don't remember 
> which platform.  PDP-10 for both?  10 for one and VAX for the other? 

I wrote a fair bit of BLISS at various stages of its evolution.  My
recollection is:

BLISS-10 & BLISS-11 came from Wulf & Co at CMU.  BLISS-10 is self-hosted.

BLISS-11 is an evolution of BLISS-10.  There was a PDP10-hosted version
of BLISS-11.  I don't think it was ported to VAX.

BLISS-36,-16,-32,-32E,-64E, MIPS, INTEL, IA64, are DEC's common BLISS -
evolved (and greatly extended) from BLISS-11, but not (really)
source-compatible for non-trivial programs.  "common" means that (with
carefully defined exceptions that can be conditionally compiled), the
same language is accepted by all, and it's possible to write portable
programs.  Including common BLISS itself.  RMS-10/20 is another
non-trivial example - same sources as VAX/RMS.  There are a number of
targets and host environment combinations that are supported.

BLISS-16 is hosted on both PDP-10 and VAX, producing PDP-11 object
code.  I used both.  I didn't encounter an Alpha-hosted version - but it
should have compiled & run there, so it probably existed.  Or was VESTed. 

Most software written in BLISS-10 & -11 was converted to common BLISS.

There was an attempt at self-hosting BLISS-16, but it failed -
technically, it ran, but there really wasn't enough address space to
make it usable.  Cross-compiling wasn't popular (networks were crude),
so BLISS-16 was not as widely adopted.

For a more complete history, see
https://www.cs.tufts.edu/~nr/cs257/archive/ronald-brender/bliss.pdf



>   paul
>
>




Re: [Simh] Custom ROMs on PDP-11 sim

2017-12-16 Thread Timothe Litt

On 16-Dec-17 15:42, Paul Koning wrote:
>
>
>> On Dec 16, 2017, at 6:14 AM, Timothe Litt <l...@ieee.org
>> <mailto:l...@ieee.org>> wrote:
>>
>> On 15-Dec-17 22:14, khandy21yo wrote:
>>> Can't you just load them into ram and *run them* from there?
>>> Rom is just non writable memory.
>>>
>>>
>> He could, except that these ROMs are probably in I/O space, so would need
>> to be part of a simulated device for any code to execute properly[1]. 
>
> It would make sense (and it would be very easy) to add a feature to
> SIMH to load a memory image file into what is nominally ROM in I/O
> space.  It could be made ROM from the program point of view (i.e.,
> it's loaded from the user interface but the CPU can't store there).
>
> paul
>
>
Sure, that's trivial.  But a ROM in a peripheral is going to want to
talk to that peripheral, so to "run the code", as the OP said he wanted
to, the CSRs need to be implemented too.  ROM or RAM, a memory
location doesn't act like a CSR, so the likely result is that the code
will loop, hang waiting for an interrupt, or do something else strange
when the locations it expects to be CSRs don't act like a device.
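A toy C model of the failure mode: the "done" bit a driver loop polls is never set when the "CSR" is really just RAM that nothing writes (the function, bit value, and poll bound are illustrative, not any real device's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DONE_BIT 0200   /* a typical PDP-11-style CSR "done" bit */

/* Toy model of ROM driver code polling a device "done" bit.  Against
 * an emulated CSR the device eventually sets the bit; against plain
 * RAM nothing ever writes it, so the poll spins.  Bounded here so the
 * demonstration terminates instead of hanging. */
static bool wait_done(volatile uint16_t *csr, int max_polls)
{
    while (max_polls-- > 0)
        if (*csr & DONE_BIT)
            return true;    /* device finished */
    return false;           /* bit never came up: the "hang" */
}
```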

If the goal is to disassemble, loading into I/O space would ensure that
all the addresses are correct.  But if he really wants to *execute* the
code, it's rather more involved.

Again, ROMs embedded in peripherals (he said a robot) are part of a
device, and the code will expect the other parts to be there and to be
functional.  His device's CSRs aren't emulated.  So this is necessary,
but not sufficient.

If it were just a BOOT rom (e.g. a M9301-xx), your suggestion would
work, as it's separate from the device emulation.   (You can read the
M9301 maintenance and operation manual on bitsavers.)




Re: [Simh] Custom ROMs on PDP-11 sim

2017-12-16 Thread Timothe Litt
ill PC-relative address
computation.

    mov PC, r0
10$: sub #<10$-CSRBASE>, r0
    bit #100, (r0)

    bic #200, 6(r0) ; Indexed - doesn't save space
or, if CSR3 gets used a lot, spend 3 words to save one/reference
    mov r0, r1
    add  #<CSR3-CSRBASE>, r1
    bic  #200, (r1)

The only case where a peripheral ROM wouldn't find its CSRs using the PC
is where it has no code.  (Or, if its coder wasn't 'sane' - e.g. forced
the device to a fixed address in I/O space.)

And yes, the code must be PIC - that is, the code's references to itself
must be position-independent since the hardware can be installed at
various addresses, and the OS can later map it to one or more VAs.

As for data: all data references (pointers) need to be
position-independent - self-relative or relative to the base of the ROM.
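A minimal C sketch of the self-relative idea (illustrative only, not DEC's actual ROM convention): each "pointer" stores the signed distance from its own location to the target, so the image works at any load address.

```c
#include <assert.h>
#include <stdint.h>

/* A self-relative "pointer": the stored value is the offset from the
 * pointer's own address to the target. */
typedef int32_t relptr;

static void relptr_set(relptr *p, const void *target)
{
    *p = (relptr)((const char *)target - (const char *)p);
}

static void *relptr_get(const relptr *p)
{
    /* Add the stored offset back to the pointer's own address. */
    return (void *)((const char *)p + *p);
}
```

Because both encode and decode are relative to the pointer's own address, copying the whole blob elsewhere leaves every internal reference valid.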

Device base address:
++
| CSR 0
| CSR 1
| ...
| CSR n
| Gap
| Code
++

Exactly how the hardware is implemented may vary.  Usually the CSRs are
below the code, though the converse can be true.  And whether the gap
exists - and how big varies.  Sometimes CSRs overlay the ROM; others the
gap aligns the code to a convenient boundary for address multiplexing -
no sane hardware designer of this period would have included an adder.

(Note that 'gap' here is not the 'gap' used in floating address
assignments; if in floating address space, it must not NXM modulo 10,
per the address assignment rules.)

This brings us back to my original points:

  * To execute code in a peripheral ROM that touches its CSRs, the ROM
must be in I/O space
  * (Almost) certainly, the code will find its CSRs using PC-relative
offsets (addressing and/or math)
  * Thus, if you load the ROM into RAM, it will assign some RAM location
to the CSRs.  This won't work.
  * If the ROM contains code that talks to the device, it is a near
certainty (not a 'risk') that it must be placed in I/O space *and*
that the CSRs are implemented.  E.g. a full device emulation must be
created.
  * The code in the ROM must be PIC, as must any pointers to data.

It is always possible that someone built a device that has a single
fixed Unibus address (e.g. no more than one per system, no address
jumpers).  And that the ROM is never called with memory mapping
enabled.  In that case, all the PIC requirements go away.  However, it
would still require working CSRs, however they're found.  But that would
be a design with severe limitations, and I'd be surprised if an engineer
with any PDP-11 background would have created such a beast.

Before quoting doctrine and casting aspersions, it is best to completely
understand the problem.

>
>
> However, that said, actually writing programs fully PIC on a PDP-11
> takes a little more effort, and many times, people didn't do that, so
> there is a risk that the program really needs to run located on the
> addresses given by the card.
>
> But, as have been said several times now, this is all moot. Without
> the ROM contents, nothing to really do here.
>
>   Johnny
>
> On 2017-12-16 12:14, Timothe Litt wrote:
>> On 15-Dec-17 22:14, khandy21yo wrote:
>>> Can't you just load them into ram and run them from there?
>>> Rom is just non writable memory.
>>>
>>>
>> He could, except that these ROMs are probably in I/O space, so would
>> need
>> to be part of a simulated device for any code to execute properly[1].
>> (And any
>> code in them probably touches the device registers, so you need the
>> device
>> to get anywhere.)  As Mark pointed out, SimH doesn't currently
>> support any
>> devices that way - it does functional emulation of I/O devices.  (It
>> wouldn't
>> be difficult to write such a device emulation if there were a reason
>> to.)
>>
>> However, to disassemble code/view data, they could be loaded into any
>> RAM
>> address & poked at with the SimH console.
>>
>> Some reformatting would be required, since ROMs of that era would
>> typically be
>> byte-wide, with 2 devices/word - e.g. one ROM contains the even bytes,
>> another
>> the odd ones. (There are other organizations.)
>>
>> FWIW, ROMs in I/O devices tend to be one or more of:
>>
>>    * Code for on-board processors (rare in early PDP-11s, but Ethernet
>>  and (t)MSCP boards had them)
>>    * Identifying data for the device (e.g. device type, model, serial,
>>  timing, geometry, etc)
>>    * Bootcode/self-test/primitive driver for the host to execute
>>    * Data for the host (e.g. Fonts or strings)
>>
>> However, as Aaron says that the devices have been erased, it's all moot
>> at this point :-)
>>
>> So that's probably more than you wanted to know...
>>
>> [1] While the code would like

Re: [Simh] Custom ROMs on PDP-11 sim

2017-12-16 Thread Timothe Litt
On 15-Dec-17 22:14, khandy21yo wrote:
> Can't you just load them into ram and run them from there?
> Rom is just non writable memory.
>
>
He could, except that these ROMs are probably in I/O space, so would need
to be part of a simulated device for any code to execute properly[1]. 
(And any
code in them probably touches the device registers, so you need the device
to get anywhere.)  As Mark pointed out, SimH doesn't currently support any
devices that way - it does functional emulation of I/O devices.  (It
wouldn't
be difficult to write such a device emulation if there were a reason to.)

However, to disassemble code/view data, they could be loaded into any RAM
address & poked at with the SimH console.

Some reformatting would be required, since ROMs of that era would
typically be
byte-wide, with 2 devices/word - e.g. one ROM contains the even bytes,
another
the odd ones. (There are other organizations.)
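The merge itself is a one-liner per word. A minimal sketch, assuming the common even/odd organization and little-endian PDP-11 words (the function name is invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One chip holds the even (low) bytes, the other the odd (high)
 * bytes; recombine the two dumps into 16-bit PDP-11 words. */
static void merge_roms(const uint8_t *even, const uint8_t *odd,
                       uint16_t *out, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        out[i] = (uint16_t)(even[i] | ((uint16_t)odd[i] << 8));
}
```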

FWIW, ROMs in I/O devices tend to be one or more of:

  * Code for on-board processors (rare in early PDP-11s, but Ethernet
and (t)MSCP boards had them)
  * Identifying data for the device (e.g. device type, model, serial,
timing, geometry, etc)
  * Bootcode/self-test/primitive driver for the host to execute
  * Data for the host (e.g. Fonts or strings)

However, as Aaron says that the devices have been erased, it's all moot
at this point :-)

So that's probably more than you wanted to know...

[1] While the code would likely be PIC, things like references to the
device's registers would also be relative to where the code is loaded. 
Looping on a "done" bit relocated to RAM is likely to be frustrating...

>
> Sent from my Galaxy Tab® A
>
>  Original message 
> From: Aaron Jackson 
> Date: 12/15/17 10:37 AM (GMT-07:00)
> To: Mark Pizzolato 
> Cc: simh@trailing-edge.com
> Subject: Re: [Simh] Custom ROMs on PDP-11 sim
>
> Hi Mark,
>
> It probably does not matter anymore unfortunately. I have a PDP-11 from
> a Unimation PUMA robot, which has a 16x EPROM board in it but no power
> supply. I was hoping to try running what was on them inside a
> simulator. I started dumping them and realised that they have all been
> erased before it was sent to me.
>
> Of course I could have tried installing the card in my PDP-11/73 but I
> thought there might have been an easier way with the simulator.
>
> Never mind, thanks anyway.
>
> Aaron.
>
>
>
>
>
> Mark Pizzolato writes:
>
> > Hi Aaron,
> >
> > On Friday, December 15, 2017 at 7:18 AM, Aaron Jackson wrote:
> >> I am wondering if it is possible to use attach ROM dumps in the
> PDP-11 simh?
> >> I haven't found anything about it in the documentation. If not, I
> suppose it
> >> wouldn't be too hard to modify the bootrom header.
> >
> > The PDP11 simulator (which simulates MANY different PDP11 models)
> doesn't
> > actually use any ROMs and doesn't currently support simulation of
> any cards
> > which user supplied ROMS might have been installed in.
> >
> > What problem are you trying to solve???
> >
> > - Mark
>
>
> --
> Aaron Jackson
> PhD Student, Computer Vision Laboratory, Uni of Nottingham
> http://aaronsplace.co.uk





Re: [Simh] Just a little bit of weirdness and extreme simh-ing

2017-12-07 Thread Timothe Litt
https://www.ijs.si/software/snprintf/

On 07-Dec-17 18:22, Jordi Guillaumes Pons wrote:
> This:
>
> [BITXOV]$ mcr pdp8
>
> PDP-8 simulator V4.0-0 Beta        git commit id: f1f8c855
> pdp8.ini-9> attach lpt printer.txt
> LPT: creating new file
> pdp8.ini-32> attach ttix 
> Listening on port 
> $JOB
>
> $MSG *** DEFINING LOGICALS ***
>
> WELCOME TO OS/8
>
> DEFINED LOGICALS:
>
> - FOCAL: RKA1: (FOCAL EXAMPLES AND BINARIES)
> - GAMES: RKB1: (BASIC GAMES)
> - DSK:   RKB0: (USER PARTITION)
> - SYS:   RKA0: (SYSTEM PARTITION)
>
>
> #END BATCH
>
> .DIR
>
>
>
> ISS   .DA   1           START .BK   1           MODUL .RL   2
> VECSUB.FT   2           BASIC .WS  21           VECSUB.LS   2
> VECMUL.FT   1           PASCAL.PA 364           VECSUB.RL   2
> VECINV.FT   1           ORB   .LD   9           VECINV.LS   2
> GRAVF .FT   2           HELPA .PS   1           VECINV.RL   2
> MODUL .FT   1           BUBBLE.PS   4           VECMUL.LS   2
> TEST  .BA   1           ORB   .LS   8           VECMUL.RL   2
> EARTH .DA   1           ORB   .RL   5           ORB   .MP   2
> ORB   .FT   7           GRAVF .LS   3           OUT   .LS 185
> RUNORB.BI   1           GRAVF .RL   2           START .BI   1
> INIT  .CM   1           MODUL .LS   2           MESSAG.TX   1
>
>   33 FILES IN  642 BLOCKS - 2599 FREE BLOCKS
>
> .COMPILE ORB,ORB                              
> sim> show ver
> PDP-8 simulator V4.0-0 Beta
>     Simulator Framework Capabilities:
>         32b data
>         32b addresses
>         no Ethernet
>         Idle/Throttling support is available
>     Host Platform:
>         Compiler: DEC C V6.4-005
>         Simulator Compiled as C on Dec  7 2017 at 23:50:20
>         Memory Access: Little Endian
>         Memory Pointer Size: 32 bits
>         No Large File support
>         SDL Video support: No Video Support
>         No RegEx support for EXPECT commands
>         OS clock resolution: 10ms
>         Time taken by msleep(1): 10ms
>         OS: OpenVMS VAX V7.3
>         git commit id: f1f8c855       
>
>
> Yep, it’s OS/8 running on simh’ PDP-8  running on VAX/VMS 7.3 running
> on simh MicroVAX 3900 :) Slow as hell (the VAX is running on an ARM
> based SoC) but it works.
>
> Does it compile out of the box? I’m afraid not.
>
> Things I had to do to compile the PDP8 simulator:
>
> - Add /MACRO=“CC_DEFS=__VAX” to the MMS command line. It looks the
> CC_DEFS macro gets emptied somehow and then the compiler complains
> because it does not like /DEF=()
> - Hacked a fast and dirty version of snprintf (which simply ignores
> the “n” part), since that routine is not in the DECC RTL. Oh, those
> were the times. This is the routine:
>
> #include <stdio.h>
> #include <varargs.h>
>
> int snprintf (va_alist)
> va_dcl
> {
>     char *buffer;
>     int size;
>     char *format;
>     int ret;
>     va_list p;
>     va_start(p);
>
>     buffer = va_arg(p, char*);
>     size = va_arg(p, int);
>     format = va_arg(p, char*);
>
>     ret = vsprintf(buffer, format, p);  /* vsprintf consumes the va_list */
>     va_end(p);
>     return ret;
> }
>
> Compile it with the /DECC/NAMES=AS_IS qualifiers and insert it into
> the [.BIN.VMS.LIB]simh-nonet-.olb library and the compilation should
> end without problems. 
>
> And voilà! You’ll have a PDP-8 simulator running under a VAX simulator
> in your machine of choice.
>
> Or, you can also run it on real VAX hardware… :)
>
>
>
>
>




Re: [Simh] MicroSD Card for SimH on Raspberry Pi 3

2017-12-06 Thread Timothe Litt
Consider a hard drive if you expect a non-trivial disk I/O load - SD
cards do wear out.

On 06-Dec-17 21:08, khandy21yo wrote:
> If you're just starting off with a pi, it might be easiest to buy a
> kit, which includes all the necessary parts to get started, including
> power supply, case, heat sinks, HDMI cable, and a SIM card preloaded
> with an os. Available on Amazon, and many others. 
>
> Get a 32 or larger card if you want to set up a lot of drives. And the
> pi3 is powerful enough for a lot of other games and stuff. Full Linux
> environment available, including compi,are, web browsers,, ...Fun toy.
>
> If you don't have hdmi display available, get a HDMI to  vga
> converter. Also a usb keyboard and mouse.
>
>
>
> Sent from my Galaxy Tab® A
>
>  Original message 
> From: Shaun McCloud 
> Date: 12/6/17 5:33 PM (GMT-07:00)
> To: simh@trailing-edge.com
> Subject: [Simh] MicroSD Card for SimH on Raspberry Pi 3
>
> Hello,
>
> I have just gotten into SimH and am planning on getting a Raspberry Pi
> 3 for my SimH usage, just to not use up a lot of space on my laptop. 
> What is a good MicroSD card for the Pi 3 and SimH?  Or does it not
> really matter as long as it works fine in the Pi 3 on its own and has
> capacity for what I want to do?
>
> Shaun McCloud, MCDST
>
>




Re: [Simh] C9.io

2017-12-01 Thread Timothe Litt

On 01-Dec-17 17:07, Dan Gahlinger wrote:
> a pi would do it.
> and it's not opening it up
> you open to just one port to just that pi
> for just the pps
>
Not quite that simple.  To expand on what I wrote previously: Typically,
machines inside your router trust each other.  In that case, once on the
Pi (or SimH guest), a user has access to anything else on your internal
network, unless you set up the right firewalls on each of your other
internal machines.  With a little care, it's not hard to set up a subnet
for the Pi & emulated machines that everyone else can distrust.

E.g. You open an ssh port to your Pi.  A user who ssh's to that Pi now
has a local address, and can ssh to your desktop - or browse for
Windoze/NFS shares - or whatever.  So you need to adjust the firewall on
your desktop to be very careful about what it permits from the Pi.  (And
guests on the Pi.)

It takes some thought to set up a moat around the Pi, but isn't
especially hard.  It does require a change in mindset from "everything
inside my router is more trustworthy than the outside" to "inside my
router is no guarantee, and access is case-by-case."

Note that the network stacks on the Guests probably haven't been updated
in a few decades, so while they will interoperate, they may have
exploitable bugs.  (Exploits never completely die...)


> electric cost of a pi is peanuts.
>
> cloud would cost you orders of magnitude more.
>
> hell I run my own servers domain cloud etc
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Joseph Oprysko 
> Date: 2017-12-01 2:37 PM (GMT-05:00)
> To: Dan Gahlinger 
> Cc: Ray Jewhurst , simh 
> Subject: Re: [Simh] C9.io
>
> Dan, it is easy peasy, but not quite free, as if you want 24/7 access
> to the box, you have to keep the system running 24-7, so electricity
> costs. Plus, I’m planning on having others log in as well, thus I
> don’t want to open up my network like that. That’s why I’m looking for
> a free hosted/Cloud solution. That way someone else can deal with the
> rest of the network security. I do enough of that for work anyway,
> don’t want to have to monitor my home network as thoroughly. 
>
> On Fri, Dec 1, 2017 at 1:12 PM Dan Gahlinger  > wrote:
>
> A Linux box running simh bridged with nat
> Easy peasy and free
>
> Get Outlook for iOS 
> 
> *From:* Simh  > on behalf of Joseph
> Oprysko >
> *Sent:* Friday, December 1, 2017 1:09:37 PM
> *To:* Ray Jewhurst
> *Cc:* simh
> *Subject:* Re: [Simh] C9.io
>  
> Well, running from inside a house and making accessible from the
> outside is easy. But most of my computers at home generally don’t
> run 24/7. 
>
> Mainly what’s needed for what we both want to be able to do isn’t
> really a shell account on a shared machine, but literally a
> dedicated VM instance, but we need to be able to access that
> instance through a public IP address.
>
> On a home network, a private IP Address (192.168.x.x, 172.x.x.x
> ‘actually I don’t think it’s the whole 172 network’, or a
> 10.x.x.x) it’s easy enough to setup port forwarding to make it
> accessible. But on the Cloud based VM’s, I don’t know if there is
> a way to do it. Well, I know there ARE ways, usually involves
> paying for the instance, an external address, and possibly the
> amount of traffic. 
>
> Actually, I know Bluehost (is it still a thing?) used to  give you
> a VM with public address in combination with their hosting/domain
> name service.  But I’m hoping to find one that will not cost me
> anything. 
>
> On Fri, Dec 1, 2017 at 12:21 PM Ray Jewhurst
> > wrote:
>
> I have been trying to figure out a solution for something
> similar to that. I want to be able to run a PDP-11 outside of
> my house for Fortran development. I would be running it on my
> Android phone. 
>
> On Dec 1, 2017 12:11 PM, "Joseph Oprysko"  > wrote:
>
> Does anyone know if I can use the Cloud9 IDE to host a
> simh System emulation?
>
> I know I’m able to build and execute it in the
> environment, but what I’d really like to achieve is to
> have a system (or several) running on various instances.
> And be able to connect to them from an external IP
> address, I believe I am able to SSH into an instance, or
> access it through the web based IDE. 
>
> An 

Re: [Simh] EXT :Re: C9.io

2017-12-01 Thread Timothe Litt
A Xeon etc. is probably overkill.

Use a Raspberry Pi.  About 7W under load with a monitor, KB, mouse
w/WiFi active - you don't need a monitor, KB, or mouse once set up.  You
can disable the WiFi. (A couple more watts if you use a magnetic drive,
which I recommend).

One time cost is about $100 once you add a case, power supply & SD card
to the $35 board.

For a reasonable workload, that should suffice and is about as
inexpensive to run as you can get.  Pi 3 is a 64-bit ARM CPU @1.2 GHz
- with 1GB memory, ethernet, WiFi, & bluetooth.  (Some OSs are only
32 bit at the moment.)  You can easily scale up with multiple hosts - it
takes quite a number to reach the price of a Xeon.

If you stick with standard packages, security is pretty much one-time
setup & periodic package updates (which includes the kernel).  As it's
cheap enough to be dedicated to simulation, it's not a disaster if
something bad does happen - as long as anything else on your internal
network distrusts the Pi & its guests.  If you put the emulated OS on
the public network, that's a bigger exposure than the host OS.

If you just provide SSH access, I recommend disabling passwords and
using RSA keys only.  It frustrates the script kiddies, and you don't
have to worry about password quality.

Cloud hosting has its own pitfalls.  I'm not a fan.

Someone mentioned running on a cellphone.  That's tough if you want
remote access because as frequently documented here, WiFi
implementations don't get along with SimH's networking.

Have fun.

On 01-Dec-17 15:09, Hittner, David T [US] (MS) wrote:
>
> You could also look at running a super-efficient 24x7 server at home
> to minimize your electric costs.
>
>  
>
> In my last computer build where I was trying to maximize performance
> with minimal power use, I put together an E3 Xeon Server with ECC
> memory that pulls an average of 35W running a SIMH VM with idle enabled.
>
> It’s all based on buying power efficient equipment. It runs near
> noiseless and cool also.
>
>  
>
> Intel Xeon E3-1275v3 3.5GHz 4C with Integrated graphics (which is fine
> for a server). Max 84W.
>
> 32GB ECC memory (overkill for SIMH, but I do other things with the
> server  :-)
>
> 500GB M.2 NVME SSD
>
> 1TB 2.5” HDD
>
> 5.25” BLU-Ray ODD
>
> 80+ Gold PSU
>
> Windows 10 Pro OS (for host)
>
> VMware Workstation for virtualization, although you could use the
> built in Windows 10 Hyper-V virtualization for free
>
>  
>
> 2.5” disk drives and SSDs pull a lot less power than their rotating
> 3.5” equivalents. So does using Integrated graphics vs. discrete video
> cards if the performance is OK.
>
>  
>
> Your point about having others manage the network security is pretty
> darn valid though !!
>
>  
>
> Dave
>
>  
>
> *From:*Simh [mailto:simh-boun...@trailing-edge.com] *On Behalf Of
> *Joseph Oprysko
> *Sent:* Friday, December 01, 2017 2:37 PM
> *To:* Dan Gahlinger 
> *Cc:* simh 
> *Subject:* EXT :Re: [Simh] C9.io
>
>  
>
> Dan, it is easy peasy, but not quite free, as if you want 24/7 access
> to the box, you have to keep the system running 24-7, so electricity
> costs. Plus, I’m planning on having others log in as well, thus I
> don’t want to open up my network like that. That’s why I’m looking for
> a free hosted/Cloud solution. That way someone else can deal with the
> rest of the network security. I do enough of that for work anyway,
> don’t want to have to monitor my home network as thoroughly. 
>
>  
>
> On Fri, Dec 1, 2017 at 1:12 PM Dan Gahlinger  > wrote:
>
> A Linux box running simh bridged with nat
>
> Easy peasy and free
>
>  
>
> Get Outlook for iOS 
>
> 
>
> *From:*Simh  > on behalf of Joseph
> Oprysko >
> *Sent:* Friday, December 1, 2017 1:09:37 PM
> *To:* Ray Jewhurst
> *Cc:* simh
> *Subject:* Re: [Simh] C9.io
>
>  
>
> Well, running from inside a house and making accessible from the
> outside is easy. But most of my computers at home generally don’t
> run 24/7. 
>
>  
>
> Mainly what’s needed for what we both want to be able to do isn’t
> really a shell account on a shared machine, but literally a
> dedicated VM instance, but we need to be able to access that
> instance through a public IP address.
>
>  
>
> On a home network, a private IP Address (192.168.x.x, 172.x.x.x
> ‘actually I don’t think it’s the whole 172 network’, or a
> 10.x.x.x) it’s easy enough to setup port forwarding to make it
> accessible. But on the Cloud based VM’s, I don’t know if there is
> a way to do it. Well, I know there ARE ways, usually involves
> paying for the instance, an external address, and possibly the
> 

Re: [Simh] Flushing Printer

2017-09-10 Thread Timothe Litt
On 10-Sep-17 08:14, David or Jan Takle wrote:
> Using the NOVA simulator, after every use of the printer it seems
> necessary to escape to the simH console and DETACH LPT in order to
> flush the rest of the simH buffer to the Windows file. Otherwise, the
> file may be incomplete.
> Is there another way to flush the buffer? By sending a particular
> control character to LPT? Or raising a control line? (NIOP LPT for
> example).
> I would really like to accomplish this under program control rather
> than having to interrupt the simulator manually.
> Thanks,
> ~David

This is a software issue in SimH, not something that code running under
simulation can (or should) fix.

Depending on the simulator, there can be two levels of buffering (plus
whatever the simulated OS does).  One is internal to the emulator -
typically buffering up to a line waiting for paper motion or hardware
buffer full.  The other is the usual C RTL buffering. (You'd expect
_IOLBF, which will flush when the simulator outputs a newline, but since
it's a disk file, it's _IOFBF - fully buffered.  How big that is is
unpredictable.  Plus WIN32 ignores _IOLBF...)  It's not uncommon for the
hardware buffer to be full, as paper motion is (somewhat
counter-intuitively) often determined at the START of a line.  (E.g. the
sequence seen by hardware is text or text, or...) I don't
recall what NOVA does. 

When I rewrote the LPT support for SimH to support PDF several years
ago, I included code to flush output to the printers when they go idle
(for all output types).  This was done for all the then-existent simulators.

The code was not accepted (there was an infinite stream of 1+ requests &
I ran out of patience), but exists in my SimH fork.  However, I have not
kept it up with SimH evolution; the effort to merge it into current
sources is unknown; it's 4 years old & a quick look shows that fork is
1500 commits behind master...





Re: [Simh] retargetable assembler

2017-09-06 Thread Timothe Litt
On 06-Sep-17 08:19, Paul Koning wrote:
>> On Sep 5, 2017, at 9:18 PM, Timothe Litt <l...@ieee.org> wrote:
>>
>> It's a heavy lift & overkill, but GCC (gas) can be made to cross-compile 
>> for/from any reasonable machine.  That gives you a complete toolset - but 
>> it's a lot of work.
> The assembler (gas) is separate from the compiler (gcc and friends).  It's a 
> prerequisite for a complete cross-package but you can certainly do a gas for 
> some new architecture without bothering with the compiler.
gdb gives you a disassembler.
> The question is assembler syntax.  If the machine you're after has a standard 
> syntax, then gas is unlikely to help since it uses Unix "as" style syntax.  
> For example, while you can assemble PDP11 programs with gas, they don't look 
> like familiar Macro-11 programs and if you feed it Macro-11 sources it will 
> complain bitterly.
Yes, but presumably this is a bootstrapping exercise - hopefully the
native assembler can be found and used once the simulator runs.

As noted, this isn't the approach I'd take, but tastes (and energy
levels) vary.
>> If it were my project, I'd define some macros in MACRO-11 to create a 
>> cross-assembler, as IIRC Whirlwind has 16 bit wordsize.  MACRO-11 has a 
>> reasonable set of operators and macro pseudo-ops.  Define the Whirlwind 
>> instructions as macros, and you're all set.  People have done this for early 
>> micros - it's not quite native and can be a bit awkward - but it works and 
>> can be put together with minimal effort.  
>>
>> You can output absolute binary from the assembler - or link/task build if 
>> you want psects or libraries.  But with the small memory size, MACRO will do.
>>
>> If you want 32-bit words, there's always MACRO-32 - pretty much the same 
>> macro capabilities.
>>
>> For a host, you can use a simh PDP-11 or VAX - whatever you're comfortable 
>> with.
> Sure, those are good options.  Others mentioned Python to write one from 
> scratch.  That is very easy.  I've written an Electrologica assembler in 
> Python, which didn't take long, and a more limited assembler is probably just 
> a week or two worth of work.
>
> One complication for using Macro-11 is that Whirlwind is one-s complement, so 
> negative numbers will be wrong.
That can be handled with a macro to convert 2's complement to 1's,
including any end-around carry.

>   paul
>
>




Re: [Simh] retargetable assembler

2017-09-06 Thread Timothe Litt

On 06-Sep-17 09:21, khandy21yo wrote:
> Reading the Wikipedia page about Whirlwind, it mentions that the pdp1
> is a direct descendent, so would a pdp1 assembler work? Or a tx0
> Assembler? I don't know if these already exist or not.
>
> Is the pdp1 a transistorized Whirlwind as the Wikipedia article
> suggests? We already have an emulator for that.
>
> Anyway, I remember reading about the tx0, and that they were always
> modifying the instruction set in hardware. For this early machine, was
> there even an official assembler Format? And which character sets did
> it use, probably not ascii.
>
ASCII didn't exist in the Whirlwind timeframe; not until the mid-1960s.
IIRC, TX-0 used a 5-level code from the Friden Flexowriter.  Probably
similar to Baudot.  Don't know about Whirlwind.

Yes, the architecture of early machines was fluid...






Re: [Simh] Fwd: retargetable assembler

2017-09-05 Thread Timothe Litt
It's a heavy lift & overkill, but GCC (gas) can be made to cross-compile
for/from any reasonable machine.  That gives you a complete toolset -
but it's a lot of work.

If it were my project, I'd define some macros in MACRO-11 to create a
cross-assembler, as IIRC Whirlwind has a 16-bit word size.  MACRO-11 has a
reasonable set of operators and macro pseudo-ops.  Define the Whirlwind
instructions as macros, and you're all set.  People have done this for
early micros - it's not quite native and can be a bit awkward - but it
works and can be put together with minimal effort. 

You can output absolute binary from the assembler - or link/task build
if you want psects or libraries.  But with the small memory size, MACRO
will do.

If you want 32-bit words, there's always MACRO-32 - pretty much the same
macro capabilities.

For a host, you can use a simh PDP-11 or VAX - whatever you're
comfortable with.

Disassemblers - it's pretty easy to write a trivial one.  It gets
complicated if you want to tease out subroutines, create a symbol table
& data.  But with only 2KW - a simple dump (e.g. address + octal +
instruction decode  + text decode in fixed columns) probably suffices. 
An editor (like emacs) that will do rectangular cut and paste is likely
less trouble than creating a fancy disassembler.  (Again, due to the
small memory and word size.)

You can invest arbitrary amounts of time to create better tools;
worthwhile for larger machines, but I would go for 'good enough' for
something as small and simple as Whirlwind...

On 05-Sep-17 20:54, Bob Supnik wrote:
> Guy is looking to build a Whirlwind simulator, eventually.
>
> I don't know of any assemblers like that. All the cross-assemblers
> I've seen are purpose built, nowadays mostly in Python.
>
>
>  Forwarded Message 
> Subject: retargetable assembler
> Date: Fri, 1 Sep 2017 10:38:08 -0400
> From: Guy Fedorkow 
> To: Bob Supnik 
>
>
>
> hi Bob,
>   I'm continuing to explore the Whirlwind world, one tiny step at a time.
>   I thought I'd look around for retargetable cross-assemblers and
> disassemblers that might work with the machine's instruction set...
> this must come up in the simh world...  do you have a favorite package?
>
>   Thanks
> /guy fedorkow
>
>




Re: [Simh] Rainbow100

2017-07-21 Thread Timothe Litt
> > And the VT240 was very slow. I never saw or used a VT125, so I don't know
> > how it compared, but it didn't have color, right?
>
> It did have color. You could connect an external RGB sync on green
> monitor to the BNC connectors at the back of the VT125.  For output on the
> built in black/white display it generated a grayscale signal that was
> merely overlaid on top of the VT100 signal. This also meant that on the
> RGB connectors you only had the VT125 graphics. No VT100 output.
>
> The VT125 coprocessor used an 8085 processor. Somehow it intercepted the
> serial line passing through that strange white connector on the VT100
> board and processed the ReGIS commands.
>
>
IIRC, the VT125 graphics are (surprisingly) faster than the VT240.   The
240 uses a T11; the 125 is a coprocessor 8085 (to the 100's 8080).

The 240 supported Tektronix 4010/4014 graphics in addition to ReGIS.

The VT100 was designed as a flexible platform, with lots of opportunity
(slots and power) for plugin options.  Internally, it looked more like a
computer with "bus" slots than a dedicated terminal.  (But no, the
actual uP bus of the main board  isn't exported - except to the Advanced
Video option.)  The actual implementations looked more like a loosely
coupled distributed system.

The STP (="standard terminal port" - strange white) connector is a
shorting connector.  When no card is inserted, the A and B side contacts
touch, and the serial lines pass through.  When an STP card is inserted,
the connection is mechanically broken; the STP card carries both sides
to the 8085, which creates a logical connection.  The devices negotiate
speed/flow control with control sequences.  The same connector (and
mechanism) is used by the Printer Port option.  Passive taps are also
possible.

The 125 graphics processor is completely independent of the VT100 text
processor - modulo some timing signals.

The resulting intensity signal is combined with (essentially) an analog
mixer; the color signals are separate.  This means that ANSI (VT100)
text and ReGis graphics can be independently overlaid on the screen;
something subsequent terminals (including the VT240) do not support.

The VT105 uses the same scheme, with a different board to produce
monochrome graphics.

All this flexibility came at a cost - multiple boards and connectors.
The VT101/102 are cost-reduced versions that consolidate the hardware
and eliminate the overprovisioning.

The 240 got rid of the Intel uPs for both technical and political
reasons.  The one T11 had to handle everything, including scan
interrupts.  This probably explains why it seemed slow - but I didn't
have to dig into the 240 internals to the same extent as the VT100.

The 240 and subsequent terminals never went as far in providing the
ability to add options as the VT100 did.





Re: [Simh] Rainbow100

2017-07-20 Thread Timothe Litt
On 20-Jul-17 13:47, Hunter Goatley wrote:
> On 7/20/2017 12:31 PM, Timothe Litt wrote:
>>
>> Gigi was sold - and used - primarily as a graphics terminal, though
>> it does have a BASIC interpreter.  It was used on the DECSYSTEM-20
>> and VAX.  There was some software support; Scribe had a driver for it.
>
> My graphics class in college used Gigi terminals connected to a PDP
> 11/44. We were all amazed at what you could do with it. Which was a
> lot, compared to the VT100s we had. That was in 1984 or 1985.
>
I didn't mean used EXCLUSIVELY with the -20 & VAX.  They were sold
there because of their price point, but would happily talk on any ASCII
RS232 line.  Perhaps your college got a good deal - or a donation.

By 84/85 there were a lot better options for graphics; the VT240(241),
VT340, VAXstation I/II were available, and PCs and Macs all had better
graphics.The DEC items all spoke ReGis, so you could move your
software painlessly. 

GiGi's only unique feature was its BASIC interpreter, which was crippled
by its lack of mass storage and limited (even for its time) memory. 
Given that, a VT125 was a better deal (and IIRC, the graphics was faster).

Still, GiGi was a neat toy, and would have seemed impressive against a
backdrop of VT100s...







Re: [Simh] Rainbow100

2017-07-20 Thread Timothe Litt
Gigi was a follow-on to the VT125, which introduced ReGIS.

Gigi was sold - and used - primarily as a graphics terminal, though it
does have a BASIC interpreter.  It was used on the DECSYSTEM-20 and
VAX.  There was some software support; Scribe had a driver for it.

I don't recall any BASIC software sold for GiGi - which would be
difficult, since GiGi has no mass storage - just two RS232 ports; one
for host comm and one for either an LA34 printer or a tablet.  With 16KB of
(D)RAM (plus screen memory) and 28K of ROM, there really wasn't much
you could do with it beyond its intended use as a terminal.  It was
thought that the EDU market might find a use for BASIC; but it wasn't
much of a thought.

CPU is an 8085 - less capable than the Z80, and not capable of running
any general purpose software - no CP/M, MS/DOS, or anything other than
the internal BASIC interpreter.

Someone sufficiently motivated might have driven one of the audio
cassette drives popular at the time (typically modem style FSK) off a
serial port.  But you'd have had to be very motivated.  GiGi wasn't
priced for the hobbyist.

I don't believe it predates the Robin - it was in 1982 (quite the year
for DEC PCish devices).

Again, I wouldn't classify it as a general purpose micro due to the
inability to load/save a program & the lack of software.


On 20-Jul-17 12:17, Johnny Billquist wrote:
> Timothe gives a lot of good info here.
>
> In addition, you also have the DEC GIGI, which I believe predates the
> Robin, and which I think also definitely would be classified as a
> "micro".
>
>     Johnny
>
> On 2017-07-20 18:06, Timothe Litt wrote:
>> On 19-Jul-17 23:23, Bill Cunningham wrote:
>>> There's no simulator for DEC's first micro is there? Will there
>>> ever be one?
>>>
>>> Bill
>>>
>>>
>> That wouldn't be the Rainbow.
>>
>> There was the Harris/Intersil pdp-8 on a chip c.a 1975.
>>
>> The DEC/WD LSI11 c.a. 1976 followed.
>>
>> All these were in embedded systems.  The LSI-11 (and especially its
>> follow-ons, the T/F/J11) were used in a number of DEC's storage and
>> communications controllers, until ultimately replaced by VAXes.  (Yes,
>> your VAX probably had more VAXes in the IO subsystem than you knew
>> about.)  They were also very popular for third party embedded systems -
>> from volume copiers to airport landing lights.
>>
>> If by 'micro', you mean general purpose consumer packaged Intel
>> architecture machine, that would be the Robin (VT180), which is a Z80
>> CPU with dual 5 1/4 inch floppies, as a plugin board for the VT100.
>> CP/M.  Produced in the AD group, which Bob Glorioso managed at the
>> time.  Released c.a. 1982.  The board had its origin as a model railroad
>> controller created as a hobby project by an engineer in that group, and
>> was brought in and adapted for the VT180 as a quick time-to-market
>> product.  (I subsequently subsequently re-adapted the board for
>> something completely different - and learned the history a few years
>> later.)
>>
>> The VT103 used the same idea, but with an LSI-11 backplane and T11 -
>> TU58 tapes & RT11.  But it was later, and not on the IA path.
>>
>> The Rainbow was the replacement for the VT180 (c.a. late 82/early 83),
>> used RX50 diskettes and optionally, a st506 winchester drive.  It was
>> part of the triplet of machines, which also included the Pro 350 (pdp11)
>> and DECmate (PDP-8), that Ken Olsen pushed as the answer to the "cheap,
>> poorly engineered" IBM PC.  Besides being over-designed for the market,
>> all three suffered from being closed systems with hardware architectures
>> different enough from the standards (IBM PC/QBus/Omnibus) to disable
>> commodity software.  Especially the Pro350, with its lobotomized P/OS
>> operating system (RSX with a horrible GUI) and limited menu of
>> application software.  (Eventually, RT was released, but too little, to
>> late.)  The DECmate never pretended to be anything other than a word
>> processor.  Ken's belief that quality would overcome price in this
>> market turned out to be very wrong.  And locking out existing software
>> made them niche products.
>>
>> Both the Rainbow and Pro got minor upgrades, then died.  The DECmate was
>> the most successful of the three in that it did exactly what it set out
>> to do; no more and no less.  It got larger winchester drives and some
>> minor software updates, but basically kept chugging along until
>> technology - Apple, WordPerfect (and eventually Word) - provided
>> bitmapped fonts.  (But lost the gold-key UI in favor of the mouse...)
>>
>> I don't think there
